Blockchain, Bitcoin, Cryptocurrencies and ICOs – Everything I Wanted To Know

To understand blockchain and cryptocurrencies like Bitcoin, we need to understand stocks and markets, currency and how it works, and a fair amount of technology, not to mention monetary policy. Consequently, very few people truly understand cryptocurrencies and Bitcoin. Right now, though, there's a feeding frenzy going on. Remember when adding a .com to your company name increased your market value? Long Island Iced Tea Corp (which actually makes iced tea) recently announced that it is changing its name to Long Blockchain Corp, and its stock surged on the news. I started this post a while back and it kept growing. This is now, as the title suggests, everything I wanted to know about blockchain, Bitcoin, cryptocurrencies and ICOs.

What is Blockchain?

How do we know a transaction has been completed? Where do we keep these records? The history of accounting pre-dates the evolution of money – with the advent of writing and numeracy, records were kept as far back as the Mesopotamian and Egyptian civilisations, but it was only much later, in the 15th century under the Medicis, that double-entry accounting came into being. This double-entry system provided the bulwark of accounting and transaction records for the next 500 years. It was reliable but not foolproof – you could go back and change the records to alter the ownership of assets, or erase records of transactions. Also, the caretaker of financial transactions was usually the banking system, or another nominated institution, which was both a blessing and a curse because it created a single point of both ownership and failure. Blockchain is an entirely new way of capturing transactions that goes from double entry to a 'multiple entry' system. How do we explain blockchain?

Here are a few excellent explanations of blockchain:

The Economist: a system that lets strangers transact using a dependable ledger

Colin Thompson has a great series explaining blockchain – here’s Part 1

And Part 2, Part 3, and Part 4

And this one from Kaspersky Labs explains Hashing in some detail

And finally from the FT

But if you’d like to skip reading the links, here’s a summary of blockchain in lay terms.

The first thing to remember is that blockchain is a network technology. Networks have certain features and properties which make them measurable, distinct and predictable. The way information travels on the Internet, and the way peer-to-peer streaming works, also build on network capabilities. And just as peer-to-peer file sharing was popularised by Napster for music, blockchain was created by the founders of Bitcoin. Both technologies have a life far beyond these initial cases. Peer-to-peer is used today for money transfers, loans and many other scenarios. Similarly, blockchain is being used for dozens of new and interesting use cases – from land registry to asset management.

How does it work? When a new transaction enters the network, it is added to an existing set of transactions to form a block – a predefined number of transactions. Let's say a set of 5 transactions forms a block. This block contains a lot of data, including numbers, strings and connections. This data is then put through a 'hash' function, which generates a long alphanumeric string that looks something like this: "7ae26e64679abd1e66cfe1e9b93a9e85".
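As a sketch in Python, the hashing step looks something like this. The transaction strings and block layout are made up for illustration; the hash function shown is SHA-256, the one Bitcoin actually uses:

```python
import hashlib

# A toy "block": a fixed batch of 5 transactions (made-up strings).
transactions = [
    "alice->bob:5",
    "bob->carol:2",
    "carol->dave:1",
    "dave->erin:4",
    "erin->alice:3",
]

# Serialise the block and run it through a hash function.
block_data = "|".join(transactions)
digest = hashlib.sha256(block_data.encode()).hexdigest()
print(digest)  # a fixed-length 64-character hex string

# Changing even one character produces a completely different hash.
tampered = block_data.replace(":5", ":6", 1)
print(hashlib.sha256(tampered.encode()).hexdigest())
```

However much data goes in, the output is always the same fixed length, and there is no way to run the function backwards to recover the input.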

At this point, let's do a quick exercise. If I give you a simple problem, say (3×4)+(5×6), you will quickly work out the answer – 42. Note that we could also get to 42 by adding up all the numbers from 3 to 9, or by multiplying 2, 3 and 7. In fact, the number of ways we can get to 42 (or any answer) is theoretically infinite. Now if I go all Douglas Adams on you and reverse the question – if the answer is 42, what is the question? – you would have no way of logically ascertaining the original numbers. You would have to resort to guessing. This is what blockchain mining is about. Blockchain miners have the fiendishly difficult task of guessing a hash, which they do by generating millions of options until some node on the network stumbles onto the right answer. The hash function can be tuned to be harder or easier, which, in a network, defines the time it takes to solve one block.
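A toy version of this guessing game – a much-simplified proof-of-work, not Bitcoin's actual protocol – keeps trying 'nonces' until the hash of the block data starts with a required number of zeros:

```python
import hashlib

def mine(block_data, difficulty=4):
    """Guess nonces until the block's hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

# Finding the nonce takes, on average, tens of thousands of guesses...
nonce, digest = mine("alice->bob:5|bob->carol:2")
print(nonce, digest)

# ...but verifying someone else's answer takes a single hash.
check = hashlib.sha256(f"alice->bob:5|bob->carol:2{nonce}".encode()).hexdigest()
print(check == digest)
```

Raising `difficulty` by one multiplies the expected work by 16 (one more leading hex zero), which is how a network can tune how long a block takes to solve.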

Once the 'answer' has been found (remember, verifying is easy, guessing is hard – just like our 42 problem above), the block is committed to the registry, which means that all the nodes in the network add it to their registry of transactions. The information thus lives not in a single place but on every computer on the network. This makes it exponentially harder to tamper with, since you would have to change the data on every computer on the network, else there would be an immediate mismatch. In the Bitcoin blockchain, this method of deriving the answer uses an algorithm called 'proof of work'. As you can see, it is computationally very inefficient – indeed, the amount of energy and computation consumed is one of the main criticisms of blockchain and Bitcoin. For this reason, the Ethereum network, another blockchain, is proposing to switch to a different, more efficient algorithm called proof of stake, which could mitigate this problem.

So we understand the block, but what about the chain? Well, every time a new block is added, it is added to the history of all previous blocks. The header hash of each block goes into the body of the next block and forms part of the next block's hash. So if you went back and changed a block, to keep the chain consistent you would have to change the next block, and by extension the one after that, and the one after that, all the way to the most recent block. So not only are the transactions on every computer, they are also linked in a chain back to the first transaction.

This is why blockchain is considered so superior – it relies on the network rather than an individual. And tampering with a transaction is fiendishly difficult: there is no single point of control, and it is near impossible to predict which computer will solve the hash problem, so you can't hack the transaction itself.

To better understand the science of networks, read Albert-Laszlo Barabasi’s book Linked. And to understand the power and significance of networks, Niall Ferguson’s The Square and the Tower is a great starting point.

A Note on Ethereum

Ethereum is a "decentralised platform that runs smart contracts", using a custom-built blockchain that developers across the world can build transaction applications on. It is stewarded by the Ethereum Foundation, a Swiss not-for-profit organisation. Note – Ethereum is a blockchain platform, and while it has its own cryptocurrency (Ether), it also allows developers and third parties to create their own cryptocurrencies. You can use Ethereum to create a crowdfunding exercise with your own cryptocurrency, to build and sell an idea, platform or product. Needless to say, this opportunity has been seized by the 'ICO' market. More on that later. According to Ethereum, you can also build decentralised autonomous organisations or decentralised applications.

Ethereum is the most mature blockchain platform available to developers and organisations across the world. For example, my colleagues have built a working prototype for a smart contract to enable electric vehicles and homeowners to create a market for charging EVs, with the management of contracts and settlement done intelligently in the background. Other use cases include clearing and settlement for banks, financial services firms, and many others.

Let’s talk about Bitcoin

If you'd like to get your head around bitcoin, I would suggest you read the book "The History of Money" by Jack Weatherford. Among other things, it traces the evolution of fiat money, which doesn't have any intrinsic value but is supported by a government decree, or a 'promise to pay'. So a lot of currency today is already a result of what Yuval Harari calls 'intersubjective realities' – i.e. something that has a meaning only because we collectively agree to the meaning, such as national borders, or the value of paper money. In this sense Bitcoin is just another level of abstraction – you also abstract away the role of a central bank or monetary authority, in favour of collective, systemic governance and a pre-fixed money supply (21 million coins).

Understanding Bitcoin

Many years ago, deep inside the bowels of the hype machine that was Silicon Valley in the late 90s, a few well-known entrepreneurs put together a spoof company. The only product of this make-believe company was its own stock. And the sales pitch went thus: the more you buy our product, the more valuable it gets. So please keep buying. The echoes of that satire have certainly been seen in the bitcoin mania (and by extension, the cryptocurrency craze) that is sweeping the world. The price of bitcoin is bungee jumping on a daily basis, confounding investors, economists and bankers alike.

[Chart: bitcoin prices]


At its core, there are three sources of confusion with Bitcoin: (1) is it a currency? (2) is it a stock? and (3) how do we value it? Let's look at them one at a time.

Bitcoin as currency

Bitcoin is notionally a currency, but it lacks a few key features of currencies. First, it is not universally accepted – stores require workarounds, and most creditors won't take it (you can't pay your mortgage with bitcoin). Second, the 'money supply', while fixed, is not subject to any visible or discernible monetary policy. And finally, can it be taken seriously when it fluctuates as wildly as it has been? See chart 6 in this link. Stability is one of the key requirements of a currency. You don't want to go to the market not knowing whether the money in your pocket will buy your monthly shopping or just a loaf of bread.

Sometime in the 1980s, the buses in the city of Kolkata printed coupons to solve the problem of giving change on board. Instead of handing back coins for tickets, conductors would give you printed coupons which could be used in lieu of coins on your next bus trips. Commuters accepted these with the odd grumble but got quite used to them. Then cornershops and other vendors started accepting them too, and pretty soon a parallel currency system was flourishing. The government stepped in and banned the use of these coupons because the volume of these transactions had become significant, and it was creating a system of transactions which could neither be monitored nor controlled. After all, there was nothing stopping somebody from printing a bunch of fake coupons and using them at unsuspecting stores.

With any traditional currency, all the clearing is done by the banking system, for all ‘non-cash’ transactions such as checks, electronic transfers, etc. But no bank is involved in clearing bitcoin transactions. So in this aspect, it resembles cash as an extra-banking way of money, similar to the bus tokens of Kolkata.

The history of currency and payments is a story of layered abstraction. From barter systems to silver and gold coins, through to promissory notes and paper money, and ultimately through ledgers and information. (The book "The History of Money" is a fascinating read, by the way.) In a sense, cryptocurrencies such as Bitcoin are just the next step in this abstraction stack. What if we replaced 'government' with an abstract algorithm to control the amount of money and implement 'monetary policy'? The problem is that with governments we know, or can ascertain, the underlying objectives for the economy and for citizens, whereas with a privately controlled cryptocurrency the motives are opaque. We should, in fact, assume that a private enterprise wants to maximise profits, so in a sense we are enabling somebody else's profitability by participating in a private cryptocurrency system. This is not by itself bad if the underlying decision-making is transparent. After all, we willingly participate in ecosystems governed by Uber, Google, or Amazon. But the complete lack of transparency around Bitcoin is a real challenge.

Imagine what would happen if we all agreed to use black pebbles as currency, and could magically all agree to value them at (say) £1 each. We would all go out and start gathering black pebbles from beaches, quarries, and wherever else we could find them, for all we were worth. Of course, as long as we could find black pebbles at a cost to us of less than £1, we would keep collecting them, and the supply of pebbles as currency would keep going up. If we wanted to buy something worth £10, it would cost us the effort of collecting 10 black pebbles. Perhaps the pebbles would start trading at a discount if they were really easy to get, and people would start trading them in for other coins if they felt the price might fall, thereby triggering a sell-off. Conversely, if black pebbles turned out to be in short supply, the price would rise above £1. In this world, the value of the currency is connected with its supply and cost. With fiat money, though, we have disconnected the cost of the currency from its value.

In the global economy of the 20th century and beyond, money has had to balance increasingly complex requirements of balance of payments, exchange rates and interest rates, acting often as a mirror of the goods and services being traded. There is no interest rate for Bitcoin, there are no balance of payments, and the currency value is driven primarily by speculative activity.

At a very practical level, a bitcoin transaction at present takes on average 10 minutes to confirm, which by itself disqualifies it from everyday purchases. In extreme cases one confirmation has taken up to 16 hours. You don't want to be standing at your coffee shop or tube station with your bitcoin wallet, waiting for your transaction to be authorised!

Bitcoin as an Investment Vehicle

Of course bitcoin isn't a stock – it's not listed on any stock exchange. Yet Bitcoin futures have been launched by a number of derivatives exchanges, and fundamentally, the behaviour of bitcoin punters resembles speculating on a stock: one fuelled by market rumours and short-term spikes, but lacking any kind of underlying economic activity.

Any stock is valued on the basis of future earnings, which pay out as dividends. Bitcoin doesn't qualify: there is no interest and there are no dividends, so there is no future stream of income.

As my co-panelist at a recent event pointed out, major banks are looking to set up bitcoin trading desks – although for Goldman Sachs this seems to have been an inadvertent step. But even if banks start trading in bitcoin, all it means is that Bitcoin is similar to other arcane financial instruments, and average punters are likely to burn their fingers, given that trading is a zero-sum game.

What Is The Value of Bitcoin?

The economist Robert Shiller says "Real understanding of the economic issues underlying the cryptocurrency is almost nonexistent". When a Nobel Prize-winning economist can't figure out the value, calls it 'exceptionally ambiguous', and has to invoke 'animal spirits', what chance have the rest of us got?

One of the ways to value any asset is to look at the value of its underlying economic activity – for example, the activity of a firm. Clearly, that is not applicable here. There is no income stream – just pure speculative activity. The attractiveness of Bitcoin is its relative non-traceability, and its popularity stems in no small part from its acceptance and use on the more nefarious parts of the internet – the Silk Road and darknet markets for contraband substances, for example.

Yuval Harari talks about our inter-subjective realities – the shared fiction that allows us to operate with conceptual constructs such as countries and currency. In this light, as long as people value bitcoin it has value. It’s a classic self-fulfilling prophecy.

Some people like to compare Bitcoin to gold as a store of value. After all, they say, gold is also only notionally valuable – if we stopped desiring it, it would lose value. But gold has specific metallurgical properties – it glitters, and its malleability allows it to be turned into fine jewellery – and it has a history of demand dating back to the start of human history.

Is there a social value to Bitcoin? This is a far more interesting question. Going back to the beginning of this discussion, we said that blockchain is a decentralised, network-based technology. It eliminates the need for central banks and central authorities. In this sense, Bitcoin and other cryptocurrencies can be quite subversive, potentially acting as disruptive agents in the face of repressive regimes and governments, and as an extra-national standard for transactions. On the other hand, as a currency that lacks any transparency of monetary policy, it remains a huge risk. The entire premise of bitcoin's value rests on the principle of a finite supply. But there are scenarios where the Bitcoin community could fork and create more coins. And what happens if the faceless Satoshi Nakamoto sells his estimated 1 million coins?

In a lot of discussions around Bitcoin and blockchain, there is a tendency – and a danger – of mixing up the two faces of Bitcoin. As a store of value or an investment vehicle, albeit a largely speculative one, it definitely has a cachet. But as a currency for everyday transactions and for smoothing global transaction flows, it is a different ask altogether, and one that bitcoin is a long way from delivering.

Bitcoin Hacks and Cybercrime

If you've followed so far, one question must have come to mind: if blockchain is so secure, why have there been so many bitcoin hacks and heists in recent times?

Just to name a few: Coincheck, a cryptocurrency exchange in Japan, suffered a $530 million hack of NEM coins in early 2018.

In 2014, Mt Gox, another Bitcoin exchange, suffered a $480m hack and filed for bankruptcy.

In 2016, Bitfinex, a Hong Kong based Bitcoin exchange was hacked for $70m.

Here's a longer list. There are differences in the technicalities, but most thefts and hacks occur when the coins are stored in 'wallets' that are ready for spending. In other words, you are not hacking a transaction, which remains secure; you are hacking a store of coins. This is typically done by copying the user's cryptographic key, which unlocks the wallet, and transferring the coins to other pseudonymous addresses. And while blockchain can track the chain of transactions, the pseudonymity prevents the criminals from actually being tracked down. The further use of 'tumblers', or mixers, ensures that the stolen Bitcoin is mixed with other coins, making it near impossible to trace.

If you're on the other side, the primary suggestion is: don't hold your coins in a hot wallet – one that is connected to the Internet. A cold wallet, by contrast, is not connected to the Net, making it far harder for hackers to access the coins.

And What about ICOs and other Cryptocurrencies?

Ah, this is where we enter shark territory. At last count there were almost 1,400 cryptocurrencies listed on Wikipedia. The first and obvious thing to say about this is that a currency is a standard of value, and with standards, less is more. Imagine walking around with dozens of currencies in your pocket, not knowing which one will be accepted where. Every transaction would be longer and more complex!

There are those who wonder whether governments could issue cryptocurrencies. While technically feasible, you would have to question the motive. As of today, a cryptocurrency is more expensive to manage, does not reach the entire population, and its adoption, use, value and acceptance are still unclear. Besides, I don't know of a government that willingly wants to give up control over its currency. Perhaps one for the future.

And what about ICOs? We have an absurd number of them now. Once again, it feels very much like the dotcom bubble. Back then, many Indian techies who had changed their names from Krishnamachari to Chris to fit into American culture changed them back when it became fashionable to have Indian CIOs while wooing investors. In much the same vein, nobody seems to want to just raise money nowadays without also attaching an ICO to it. An ICO, or initial coin offering, means that a company raises money by creating its own cryptocurrency, and investors receive these newly minted coins. The underlying promise is that the company will create an effective market for this currency – which is the difficult bit. In reality, there is no guarantee that these coins will be of any use, but FOMO is driving investors in droves to the ICO market. Only 48% of ICOs were successful last year, yet they raised $5.6bn.

There have been ICOs from a very wide range of providers, including former lingerie tycoons, and online poker platforms. Although 90% of ICOs are expected to eventually crash, there are people who believe that future ICOs will be more tightly connected with the activity of the company, in what they call ICO 2.0.

One of the most eagerly anticipated ICOs of 2018 is from Telegram, the messaging app. On the plus side, having an existing network, user base and value certainly gives Telegram a better shot and platform for making a success of a cryptocurrency. Telegram is looking to launch a new blockchain, potentially challenging Ethereum's primacy. If Telegram can follow the path created by WeChat and integrate commerce into messaging via its Gram coins, as it suggests, then we may have a winner. However, you do have to decipher terms like 'Instant Hypercube Routing' and 'Byzantine Fault Tolerant' protocols. Most importantly, it wants to handle a million transactions per second. This is far beyond the Bitcoin speeds we spoke about earlier, and orders of magnitude faster than Visa and MasterCard, which reportedly process a few thousand transactions every second between them. Be warned: Telegram plans to keep 52% of its cryptocurrency, so the value of the currency will be significantly managed by Telegram's owners.

In Sum

This has turned out to be a much, much longer post than I intended initially, but I think I can summarise my thoughts as follows:

Blockchain: a potentially massive new technology that can change the world, but still in its early stages of development and fine-tuning.

Bitcoin: a great option for speculative investment, but definitely not yet useful as an alternative currency.

Cryptocurrencies: a minute fraction of them will be useful, and finding the right one may be a matter of luck. But in 20 years we could all be using an extra-national cryptocurrency as legal tender.

ICOs – definitely a trap for FOMO investors looking to somehow get into the cryptocurrency game. Most will go nowhere.


When Technology Talks

Conversational systems, aka chatbots, are starting to become mainstream – here's why you should stay ahead of the game:


The shape-shifting yin-yang between humans and technology is one of the hallmarks of digital technologies, but it is perhaps most pronounced and most exploited in the area of conversational systems. But to truly appreciate conversational systems, we need to go back a few steps.

For the longest part of the evolution of information technology, the technology has been the unwieldy and intransigent partner, requiring humans to contort themselves to fit. Mainframe and ERP systems were largely built to defend the single version of truth and cared little for the experience. Cue hours of training, anti-intuitive interfaces, clunky experiences, and flows designed by analysts rather than designers. Most of us who have lived through the many ages of this type of IT will have experienced this first hand. If these systems were buildings, they would be warehouses and fortresses, not homes or palaces. Too bad if you didn't like it. What's 'like' got to do with it? (As Tina Turner might have sung!)

Digital technology started to change this model. Because of its roots in consumer technology rather than the enterprise, design and adoption were very much the problem of the providers. This story weaves its way through the emergence of the web and social media, and culminates with the launch of the iPhone. There is no doubt – the iPhone made technology sexy. To extend the oft-quoted NASA analogy, it was the rocket in your pocket! The app ecosystem and broadband internet, which were key to Web 2.0, suddenly introduced a whole new ingredient into the technology cookbook – emotion! Steve Jobs didn't just want technology to be likable; he wanted it to be lickable.

The balance between humans and technology has since been redressed significantly – apps and websites focus on intuitiveness, and on moulding the process around the user. To deal with a bank, you no longer have to follow the bank's convenience for time and place, or follow its processes of filling in a lifetime's worth of forms. Instead, banks work hard to make it work for you. And you want it 24/7 – on the train, at bus stops, in the elevator, and before you get out from under your blanket in the morning. And the banks have to make that happen. The mouse has given way to the finger. Humans and technology are ever closer. This was almost a meeting of equals.

But now the pendulum is swinging the other way. Technology wants to make it even easier for humans. Why should you learn to use an iPhone or figure out how to install and manage an app? You should just ask for it the way you would, in any other situation, and technology should do your bidding. Instead of downloading, installing and launching an app, you should simply ask the question in plain English (or a language of your choice) and the bank should respond. Welcome to the world of Conversational Systems. Ask Siri, ask Alexa, or Cortana, or Google or Bixby. But wait, we’ve gotten ahead of ourselves again.

The starting point for conversational systems is the chatbot, and a chatbot is an intelligent tool – yes, we're talking about AI and machine learning. Conversational systems are one of the early and universal applications of artificial intelligence. But it's not as simple as just calling it AI: there are actually multiple points of intelligence in a conversational system. How does a chatbot work? For the user, you just type as though you were chatting with a human and you get human-like responses back in natural language. Your experience is no different from talking with another person on WhatsApp or Facebook Messenger, for example. The point is that you are able to 'speak' in the way you are used to, and the technology bends itself around you – your words, expressions, context, dialect, questions and even your mistakes.

Let’s look at that in a little more detail. This picture from Gartner does an excellent job of describing what goes into a chatbot:

The user interface is supported by a language-processing and response-generation engine. This means the system needs to understand the user's language, and it needs to generate responses that linguistically match the language of the user – and often be cognizant of their mood. There are language engines for this, such as Microsoft's LUIS or Google's natural language processing tools.

Behind this, the system needs to understand the user's intent. Is this person trying to pay a bill? Change a password? Make a complaint? Ask a question? It must be able to qualify the question or issue, understand the urgency, and so on. The third key area of intelligence is contextual awareness. A customer talking to an insurance company in a flood-hit area has a fundamentally different context from a new prospect, though they may be asking the same question: 'does this policy cover xxx?' And of course, the context needs to be maintained through the conversation – an area Amazon Alexa is only just fixing now. When you say 'Alexa, who was the last president of the US?' and Alexa says 'Barack Obama', and you then ask 'how tall is he?', Alexa doesn't understand who 'he' is, because it hasn't retained the context of the conversation.

And finally, the system needs to connect to a host of other systems to extract or enter data. Needless to say, when something goes wrong, it needs to 'fail gracefully': "Hmm… I don't seem to know the answer to that. Let me check…" rather than "incorrect command" or "error, file not found". These components are the building blocks of any conversational system. As with any AI application, we also need data to train the chatbot, or to allow it to learn 'on the job'. One challenge with the latter approach is that the chatbot is prone to the biases of the data, and real-time data may well have biases – as Microsoft discovered with its Twitter-based chatbot.
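To make those building blocks concrete, here is a minimal sketch of a chatbot core: keyword-based intent matching, a context store that survives across turns, and a graceful fallback. The intents, keywords and replies are all invented for illustration – a production system would use a trained NLU engine such as LUIS rather than keyword rules:

```python
# Hypothetical intents with keyword triggers (illustrative only).
INTENTS = {
    "pay_bill": {"pay", "bill", "payment"},
    "reset_password": {"password", "reset", "locked"},
    "complaint": {"complain", "unhappy", "refund"},
}

REPLIES = {
    "pay_bill": "Sure - which account would you like to pay from?",
    "reset_password": "No problem, I've sent a reset link to your email.",
    "complaint": "I'm sorry to hear that. Let me raise this for you.",
}

def classify(utterance):
    """Crude intent detection: match keywords against the user's words."""
    words = set(utterance.lower().split())
    for intent, keywords in INTENTS.items():
        if words & keywords:
            return intent
    return None

def respond(utterance, context):
    intent = classify(utterance)
    if intent is None:
        # Fail gracefully instead of "error: unknown command".
        return "Hmm... I don't seem to know the answer to that. Let me check..."
    context["last_intent"] = intent  # retained for follow-up turns
    return REPLIES[intent]

context = {}
print(respond("I want to pay my bill", context))       # bill-payment reply
print(respond("what is the meaning of life", context)) # graceful fallback
```

Even in this toy form, the separation shows up clearly: understanding language, recognising intent, keeping context, and degrading gracefully are distinct jobs.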

We believe that chatbots should be individually modular and very narrow in scope. Think of a network of chatbots, each doing a very small and focused task. One chatbot may just verify the customer's information and authenticate her; another may just do password changes. As far as the user is concerned, they may never know they're communicating with many bots – the network of bots acts as a single entity. We can even have humans and bots working in the same network, with customers moving seamlessly between bot and human interactions depending on the state of the conversation. In fact, triaging the initial conversation and deciding whether a human or a bot needs to address the issue is itself something a bot can be trained to do. My colleagues have built demos of bots that can walk a utility customer through a meter-reading submission, for example, and also generate a bill for the customer.
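The network-of-bots idea can be sketched as a simple dispatcher: a triage function routes each conversation to a narrow, single-purpose bot, falling back to a human when no bot matches. The bot names and routing rules here are hypothetical stand-ins:

```python
def password_bot(message):
    return "I can help reset that - sending a secure link now."

def meter_bot(message):
    return "Please type the digits shown on your meter."

def human_handoff(message):
    return "Let me connect you to one of our team."

def triage(message):
    """Decide which narrow bot (or human) should handle the message."""
    text = message.lower()
    if "password" in text:
        return password_bot
    if "meter" in text or "reading" in text:
        return meter_bot
    return human_handoff  # anything the bots can't handle goes to a person

for msg in ["I forgot my password",
            "I'd like to submit my meter reading",
            "Your service ruined my week"]:
    handler = triage(msg)
    print(handler.__name__, "->", handler(msg))
```

Because each bot does one thing, individual bots can be added, retrained or retired without touching the rest of the network – the user only ever sees a single conversation.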

Bots are by themselves individual micro-apps trained to perform certain tasks. You can have a meeting-room bot that just helps you find and book the best available meeting room for your next meeting, or a personal-assistant bot that just manages your calendar. We are building a number of these for our clients. Bots are excellent at handling multi-modal complexity – for example, when the source of complexity is that there are many sources of information. The classic case is five people trying to figure out the best time to meet, based on their calendars. As you well know, this is a repetitive, cyclical, time-consuming and often frustrating exercise, with dozens of emails and messages being exchanged. This is the kind of thing a bot can do very well: identify, say, the 3 best slots that fit everybody's calendars, keeping in mind travel and distances. Chatbots are just a special kind of bot that can also accept commands and generate responses in natural language. Another kind of bot is a mailbot, which can read an inbound email, contextualise it, and generate a response while capturing the relevant information in a data store. In our labs we have examples of mailbots that can respond to customers looking to change their address, for example.
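The core of that scheduling task is easy to sketch: intersect everyone's free slots and return the first few. The calendars below are made-up sets of free hour-slots; a real bot would pull availability from a calendar API and weigh travel time as well:

```python
# Made-up free hour-slots for five colleagues (24-hour clock).
calendars = {
    "ana":   {9, 10, 11, 14, 15},
    "bert":  {10, 11, 13, 14},
    "chen":  {9, 10, 14, 16},
    "dipa":  {10, 14, 15},
    "elena": {10, 12, 14},
}

def best_slots(calendars, n=3):
    """Hours when everyone is free, earliest first."""
    common = set.intersection(*calendars.values())
    return sorted(common)[:n]

print(best_slots(calendars))  # the only hours all five share
```

What takes five people dozens of emails is, for a bot, a single set intersection – which is exactly why this kind of multi-source drudgery is such a natural fit for bots.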

Coming back to chatbots: if you also add voice – i.e. a speech-to-text engine – to the interface, you get an Alexa- or Siri-like experience. Note that we're now adding yet more intelligence, which needs to recognise spoken words, often against background noise and across a range of accents (yes, including Scottish ones). Of course, when it's on the phone, there are many additional cues to the user's context. The golden mean lies in the space between recognising context and making appropriate suggestions, without making users feel their privacy is being compromised. Quite apart from the intelligence, one of the real benefits for users is often the design of a guided interface that walks a user step by step through what might otherwise be a daunting set of instructions or forms, or a complex transaction – such as an insurance claim or a mortgage quote.

Gartner suggests that organisations will spend more on conversational systems in the next 3 years than they do on mobile applications. This would suggest a shift to a 'conversation first' interface model. There are already some excellent examples of early movers here. Babylon offers a conversational interface for providing initial medical inputs and is approved by the NHS. Quartz delivers news using a conversational model. You can also build conversational applications on Facebook to connect with customers and users. Chatbots are also being used to target online human trafficking. Needless to say, all those clunky corporate systems could well do with more conversational interfaces. Imagine just typing in "TravelBot – I need a ticket to Glasgow on Friday the 9th of February. Get me the first flight out from Heathrow and the last flight back to either Heathrow or Gatwick. The project code is 100153." And sitting back while the bot pulls up options for you, and also asks you whether you need to book conveyance.
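Under the hood, the first job of such a bot is to turn free text into structured 'slots' that booking systems can consume. Here is a deliberately naive, regex-based illustration of that step – real systems use trained NLU models, and the field names here are made up for the example:

```python
# Toy "slot filling": extract structured fields from a TravelBot-style
# request. Real conversational systems use trained intent/entity models;
# this regex sketch just makes the idea concrete.
import re

def parse_travel_request(text):
    slots = {}
    if m := re.search(r"ticket to (\w+)", text, re.I):
        slots["destination"] = m.group(1)
    if m := re.search(r"from (\w+)", text, re.I):
        slots["origin"] = m.group(1)
    if m := re.search(r"project code is (\w+)", text, re.I):
        slots["project_code"] = m.group(1)
    return slots

request = ("TravelBot - I need a ticket to Glasgow on Friday. Get me the "
           "first flight out from Heathrow. The project code is 100153.")
print(parse_travel_request(request))
```

Once the slots are filled, the bot can query flight APIs and come back with options, asking follow-up questions for anything it could not extract.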

Conversational systems will certainly make technology friendlier, humanising it in ways we have never experienced before. I often find myself saying please and thank you to Alexa, and we will increasingly anthropomorphise technology via the nicknames we give these assistants. You may already have seen the movie "Her". We should expect that this will bring many great new ideas and brilliant solutions, and equally pose new social and psychological questions. Consider for example the chatbot that is designed just for conversation – somebody to talk to when we need it. We often talk about how AI may take over the world and destroy us. But what if AI just wants to be our best friend?

My thanks to my colleagues and all the discussions which have sharpened my thinking about this – especially Anantha Sekar – who is my go-to person for all things Chatbots.

My book: Doing Digital – Connect, Quantify, Optimise – is available here, for the price of a coffee!

As with all my posts, all opinions here are my own – and not reflective of or necessarily shared by my employers.

CityMapper Does Connect, Quantify, Optimise

I know hundreds of people who know and love the Citymapper app, but the company did something recently which really impressed me. As you know, the app uses a number of public data streams to help you navigate your city – London being a good example. So you just have to say 'get me to work' or 'get me home' or any other destination and it tells you the best ways across buses, trains, walking, cycling, or driving. It also helpfully offers an Uber connection, and for good measure includes futuristic options such as 'catapult' or 'teleportation' to appeal to your quirky side.

They currently operate in about 40 cities across 4 continents, and base future city expansion on a popular vote. Needless to say, they collect a ton of data about where people are travelling to and from. But the really interesting thing is what they do with all the data they collect.

In my recently published book Doing Digital, I proposed the model of Connect/ Quantify/ Optimise for digital. Designing something that is easy and frictionless to use allows you to get to Connect. Having thousands, or even millions of people use your app gives you the data which allows you to Quantify – for example, Citymapper can see where its most commonly visited areas are, where they have or lack coverage, and market their app accordingly. They can build revenue models with Uber which allow them to commercialise the traffic they send to Uber. But the last step is where the magic often is – this is where you start to see new value and tweak your business or commercial model based on the opportunities that the Connect & Quantify stages throw up.

In the case of Citymapper, this is a bus service. Citymapper can see, based on travel patterns, which areas and routes are underserved by public transport. Using this knowledge, they have worked with Transport for London to launch a small, green bus which runs on a fixed route. It's called CMX1 and it's a 'pop-up route' – which presumably means that they will validate the route based on the data it generates over a trial period. What is even more fascinating is to consider some of the underlying assumptions that the Citymapper model is challenging. One of them is that bus routes are cast in stone and have to be long term commitments. But what if these routes could be intelligently introduced in response to shorter term needs and changes? The team are even trying to improve the experience of the bus journey by redesigning the bus outside and inside.

I love that their blog exhorts customers to 'come and watch an app company fumble around with learning how to run a service with real vehicles and drivers'. This ticks the box of building learning organisations in the classic Eric Ries model. It is also an excellent example of Connect/ Quantify/ Optimise. And I fully expect Citymapper to be thinking about autonomous vehicles in their R&D room – they will probably be in a position to unleash an autonomous fleet in a few years, based on the accumulated lessons from this exercise. Yet another Connect-Quantify-Optimise cycle at work.

Service Design Drives ‘Affordable Luxury’ Business Models

One of the manifestations of digital business models built around good service design is the burgeoning of affordable luxury, which carves out an entirely new aspirational category within the sizeable middle class market.
But to illustrate, let me tell you a story, based on my experience of last week. I always have to buy trousers and get them altered, because I don't fit the shape that they come in off the shelf. Or, as Garfield the cartoon cat once said, "I'm not overweight, I'm undertall". So I was pleasantly surprised to find a high street retailer which offers an alteration service for its chinos (this is not common in London, btw). I bought a pair and took it to the counter to ask if they would measure and alter it for me. They said I would need to measure it myself and fold it to the point where I wanted the length reduced. Now, I don't know if you've ever tried measuring your own trouser length. It's about as easy as painting a smiley in the middle of your own back. So I said I'd take it home, measure it and bring it back. Next morning I was back with the trousers duly folded. I stood in the check-out line, and 10 minutes later I was told I needed to put a pin in to keep the fold. I asked for a pin, but of course, they didn't have one. It took me another 10 minutes of queuing at their alteration desk on another floor, and then a final wait in the original queue. If you're like me, at these moments you feel the life force seeping out of you.
For people who are time poor, which is most of us in most cities across the world, this ability to value the customer's time is such a critical aspect of any service that I'm always amazed when people don't get it. In this case my joy at finding the alteration service was definitely tempered by the half hour of my time I lost in the process. And based on my simple, one-off experience, you can immediately see how service design could be used to improve this dramatically – i.e. if somebody thought through the experience end to end, for the customer. Upfront information about the service terms is a simple idea. Just below the in-store poster announcing the service should be a simple list of what the shopper needs to do to use this wonderful service. Expectation setting often makes all the difference. Having a tape measure with a small weight that can be used like a plumb line in front of the mirror, to get an accurate length, is another simple idea. These should be in the dressing room. On the basis that I might want to come back for more, why not let me store my measurements in the store app (they don't)? In my perfect world, I could sit at my desk at work at the end of the day and order another couple of pairs, based on the new colours available, and they would have the trousers ready in the store across from my work at a time they could commit to. The world is full of people like me who will repeat-buy clothes from brands they trust and have had a good experience with. This is fundamentally the difference between a more traditional view of the business and an outside-in view – driven by service design which puts the consumer in the centre and tries to remove all the friction in the entire buying cycle.
There are parts of the world, such as most of India, where this is an easy, people-driven process. You buy a pair of trousers, and an in-house tailor takes your measurements and gives you a time for delivery. This kind of people-driven process, and in fact the very idea of customisation, is a luxury in the western world – especially when it comes to high street apparel brands. People are expensive. Factory-made clothes cost less than half of tailor-made ones. Thanks to improved stock management and product design, you can now get more options within the clothes you wear – a longer sleeve, a different collar, a slimmer cut, etc. But economics demands that any customisation at the point of delivery remains outside the purview of most products. Yet digital models can significantly lower the bar for the accessibility of a luxury service. In my example of alteration – you can see how the app enablement and ordering based on my specific measurements could even be done in a centralised way and delivered to a store. I can live with a lead time of a week – as long as it's a reliable one. At the core of this is the ability to take the customisation information from the universe of consumers and deliver the customisation at a much lower cost, at higher scale.
When you walk into your regular coffee shop, you don't have to tell them each time that you want 2 shots of coffee, half a cup of foamed milk, with semi-skimmed extra hot milk (or as Niles Crane would say, "Double Cappuccino – half-caf, non-fat milk, with just enough foam to be aesthetically pleasing but not so much that it leaves a moustache"). Instead, you can just say 'the usual'. Starbucks can increasingly do that via the app – because no matter which Starbucks you go to, if you order through the app, you can do it with one click. And Starbucks can even analyse your choices and behaviours, and make suggestions for you. Industrialisation in all its forms has historically created scale but lost customisation. Digitisation is allowing us to layer the customisation back over the industrial scale. This is why it's so critical for consumer-facing businesses to embrace this combination of service design and digital customisation.
We subscribe at home to a brand called Hello Fresh – they are one amongst a few who deliver ready-to-cook dinners. Each dinner is a dish that you've chosen from a menu via the site. It comes with a recipe and all ingredients pre-measured and packed individually. If your tiger prawn recipe requires echalion shallot or samphire – don't worry if you don't have them in your fridge (let alone if, like me, you have to google them to learn what they are), they come in the box, in the right amounts. This too is like a luxury service, but thanks to the underlying business model and the digital enablement of the ordering, menu and selection process, it can be delivered to a larger, non-luxury audience.
If you look around there are dozens of places where this kind of customisation, once outside the purview of industrial models, is now back in vogue thanks to digital tools. Personal financial advisors, customised movie recommendations, configurable holidays, customised trainers – and many more. Remember though, this is not an efficiency play. It’s not enough to build a generic digital front end that will drive this mass customisation. It needs a commitment to service design to see the whole experience through the eyes of the consumer and to understand where her challenges, points of confusion, discomfort or dissatisfaction are and build the flexible digital model to address these.

So You Think The Brain is Better Than The Computer?


Every discussion on the power of computers is bracketed by the comparison to the human brain, and the dwarfing of any known computer by the fantastical power of the human brain. Estimates by Ray Kurzweil suggest a capability of 10^16, or 10 quadrillion, calculations per second (cps). And it runs on 20 watts of 'power'. By comparison (according to this excellent article that everybody should read), the world's best computer today can do 34 quadrillion cps, but it occupies 720 sq meters of space, costs $390m to build and requires 24 megawatts of power.
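Taking those quoted figures at face value, the energy-efficiency gap is easy to work out. This is strictly a back-of-envelope calculation over two rough estimates, nothing more:

```python
# Back-of-envelope: calculations per second per watt, using the figures
# quoted above (both are estimates, not measurements).
brain_cps, brain_watts = 1e16, 20            # Kurzweil's brain estimate
computer_cps, computer_watts = 3.4e16, 24e6  # the supercomputer quoted above

brain_eff = brain_cps / brain_watts          # cps per watt
computer_eff = computer_cps / computer_watts
print(f"brain advantage: ~{brain_eff / computer_eff:,.0f}x more cps per watt")
```

On these numbers the brain does hundreds of thousands of times more work per watt, which is the real punchline of the comparison.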

Besides, that's just the 'hardware', so to say. The brain's sophistication is far, far ahead of the computer's, considering all the miraculous things it can do. We know now that the biggest evolution of the human brain was the growth of the prefrontal cortex, which required a rethink of the interior design of the skull. Also, a key facet of the brain is that it is a neural network – capable of massively parallel processing – simultaneously collecting and processing huge amounts of disparate data. I'm tapping away on a laptop savouring the smell and taste of coffee while listening to music on a cold cloudy day in a warm cafe surrounded by art. The brain is simultaneously assimilating the olfactory, visual, aural, haptic and environmental signals, without missing a beat.

It would appear therefore that we are decades away from computers which can replace brain functions and therefore, jobs. Let’s look at this a little more closely though.

The same article by Tim Urban shows in great detail how the exponential trajectory of computers and software will probably lead to affordable computers with the capacity of a human brain arriving by 2025, and more scarily, achieving the computing capacity of all humans put together by 2040. This is made possible by any number of individual developments and the collective effort of the computer science and software industry. Kevin Kelly points to 3 key accelerators, apart from the well known Moore’s law. The evolution of graphics chips which are capable of parallel processing – leading to the low cost creation of neural networks; the growth of big data, which allows these ever more capable computers to be trained; and the development of deep learning – the layered and algorithmically driven learning process which brings much efficiency to how machines learn.

So the hubris around the human brain may actually survive another decade at best and thereafter the question might not be whether computers can be as good as humans but how much better than the human brain could the computer be. But that has been well argued and no doubt will be so again, including the moral, ethical and societal challenges it will bring.

I actually want to look at the present and sound a note of warning to all those still in the camp of ‘human brain hubris’. Let me start with another compliment to the brain. Consider this discussion between two friends meeting after ages.

A: how have you been? What are you doing nowadays?

B: I’m great, I’ve been playing chess with myself for ages now.

A: Oh? How’s that? Sounds a bit boring.

B: Oh no, it’s great fun, I cheat all the time.

A: But don’t you catch yourself?

B: Nah, I’m too clever.

One of the most amazing things about the brain is how it's wired to constructively fool us all the time. We only 'think' we're seeing the things we are. In effect, the brain is continuously short-circuiting our complex processing and presenting simple answers. This is brilliantly covered by Kahneman, and many others. Because, if we had to process every single bit of information we encounter, we would never get through the day. The brain allows us to focus by filtering out complexity through a series of tricks. Peripheral vision, selective memory, and many other sophisticated tricks are at play every minute to allow us to function normally. If you think about it, this is probably the brain's greatest trick – building and maintaining this elaborate hoax that keeps up the fine balance between normalcy and what we would call insanity, thereby allowing us to focus sharply on specific information that needs a much higher level of active processing.

And yet, put millions of all of these wonderful brains together, and you get Donald Trump as president. You get Brexit, wars, environmental catastrophe, stupidity on an industrial scale, and a human history so chock-full of bad decisions that you wonder how we ever got here. (And if you're pro Trump, then consider that even more people with the same incredible brain voted for Clinton.) You only have to speak with half a dozen employees of large companies to collect a legion of stories about mismanagement, and how the intelligence of organisations is often considerably less than the sum of the parts. I think it would be fair to say that we haven't yet mastered the ability to put our brains together in any kind of reliably repeatable and synergistic way. We are very much in trial and error mode here.

This is one of the killer reasons why computers are soon going to be better than humans. In recent years, computers have been designed to network, to share, pool and exchange brain power. We moved from the original mainframe (one giant brain), to PCs (many small brains), to a truly cloud based and networked era (many connected brains working collectively, much, much bigger than any one brain). One of the most obvious examples is blockchain. Another is the driverless car. Now, most of you might agree that as of today you would rather trust a human (perhaps yourself) rather than a computer at the wheel of your car. And you may be right to do so. But here are two things to ponder. Your children will have to learn to drive all over again, from scratch. You might be able to give them some guidance, but realistically maybe 1% of your accumulated expertise behind the wheel will transfer, from your thousands of driving hours. Also, let's assume you hit an oil slick on the road and almost skid out of control. You may, from this experience, learn to recognise oil slicks, deal with them better, perhaps learn to avoid them or slow down. Unfortunately, only one brain will benefit from this – yours. Every single person must learn this by experience. When a driverless car has a crash today because it mistakes a sky blue truck for the sky, it also learns to make that distinction (or is made to). But importantly, this 'upgrade' goes to every single car using the same system or brain. So you are now the beneficiary of the accumulated learning of every car on the road that shares this common brain.

Kevin Kelly talks about a number of different kinds of minds / brains that might ensue in the future, that are different from our own. But you can see a very visual example of this in the movie Live Die Repeat, where the protagonists must take on an alien that lives through its all-seeing superbrain. It gets better. If, like the airline industry, automotive companies agree to share this information following every accident or near-miss, then you start to get the benefit of every car on the road, irrespective of make. Can you imagine how quickly your driverless car would start to learn? Nothing we currently know or can relate to prepares us for this exponential model of learning and improvement.

It's not just the collective, though. The super-computer that is the brain fails us in a number of ways. Remember that the wondrous brain is fantastic as the basic hardware and wiring, and possibly, if you will allow me to extend the analogy, the operating system. Thereafter, it is the quality of your learning, upkeep and performance management that takes over, and this is where we as humans start to stumble. Here are half a dozen ways in which we already lag behind computers:

Computation: This is the first and the most obvious. Our computational abilities are already vanishingly small compared to the average computer's. This should require no great elaboration. But when you apply it to, say, calculating the speed of braking to ensure you stop before you hit the car that's just popped out in front, but not so fast that you risk being hit by the car behind you, you're already no match for the computer. Jobs that computers have taken over on the basis of computation include programmatic advertisement buying and algorithmic trading. Another type of computation involves pattern recognition – for example checking scans for known problems, as doctors do.

Observation: Would you know if the grip on your tyres has dropped by 10%? 5%? What if your engine is performing sub-optimally, or if your brakes are 3% looser than normal? Have you ever missed a speed limit sign as you come off a freeway or motorway? Have you ever realised with a fright that there was something in your blind spot? This one is particularly obvious as well. A computer, armed with sensors all around the car, is much less likely to miss an environmental or vehicular data point than you are. With smarter environments, you may not need speed limit signs for automated cars. All this is before we factor in distractions, less than perfect eyesight and hearing, and just unobservant driving. Other observation-based professions include security and flight navigation, where computers are already at work.

Reaction time: any driving instructor will tell you that the average human reaction time is around a second. In other words, at 40 mph, you will have covered nearly 18 metres before your brain and body start to react. By the time you've actually slammed the brakes or managed to swerve the car, you may well be 20-25 metres down the road. By contrast, there is already evidence of autonomous vehicles being able to pre-empt a hazard and slow down. Even more so if the crash involves another car using the same shared 'brain'. A lot of thought is currently being given to the reaction time of a human taking over if the autonomous system fails. This is of course a transient phase, until the reliability of the autonomous system reaches a point where this is only a theoretical discussion.
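The arithmetic behind this point is straightforward. The sketch below assumes roughly one second of human 'thinking time' (the figure commonly used in driving theory) against an assumed, purely illustrative 0.1 s sensor-to-brake latency for a machine:

```python
# Distance covered before any braking begins, for a given speed and
# reaction time. The 0.1 s machine latency is an illustrative assumption,
# not a measured figure.
MPH_TO_MPS = 0.44704  # metres per second in one mile per hour

def reaction_distance(speed_mph: float, reaction_s: float) -> float:
    """Distance travelled (in metres) during the reaction time."""
    return speed_mph * MPH_TO_MPS * reaction_s

print(f"human at 40 mph:   {reaction_distance(40, 1.0):.1f} m")
print(f"machine at 40 mph: {reaction_distance(40, 0.1):.1f} m")
```

Roughly 18 metres for the human against under 2 metres for the machine, before braking distance is even counted.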

Judgement: the problem with our brilliant brains is that we rarely allow them to work to their potential. In the US, in 2015, 35,000 people were killed in traffic accidents; almost 3,500 of those involved distracted driving – where the driver is cognitively disengaged. There are endless reasons why we're not paying attention when we're driving: tiredness, stress, anger, conversing with somebody, or worse, alcohol or being distracted by our phones. There have been studies showing that judges' decisions tend to be harsher as the judges get hungry. Great though our brains are, they are also very delicate – and easily influenced. Our emotional state dramatically impacts our judgement. And yet, we often use judgement as a way of bypassing complex data processing. That is invaluable where the data doesn't exist. But with the increasing quantification of the world, we may need less judgement and simply more processing – such as 'Hawk-Eye' in tennis and 'DRS' in cricket.

Training: how long did it take you to learn to drive? A week? A month? Three? How long did it take you to be a good driver? Six months? Going back to my earlier comments – this needs to be repeated each time, for each person, so the collective cost is huge. Computers can be trained much faster, and do not need to repeat the experiential component one computer at a time. So in any job where you have to replace people, a computer will cut out your training time. This can include front desk operations, call centres, retail assistants, and many more. The time to train an engine such as IBM Watson has already gone from years to weeks.

So while we should agree that the human brain is marvellous for all it can do, it's important to recognise its many limitations. Let's also remember that the human brain has had an evolutionary head-start of some 6 million years, and the fact that we're having this discussion suggests that computers have reached some approximation of parity in about 60-odd years. So we shouldn't be under any illusions about how this will play out going forward. I wrote this piece to point out that even as of today, there are many parameters along which the brain already lags behind its silicon and wire based equivalent. A last cautionary point – the various cognitive functions of the brain peak at different points of our lives, some as early as our 20s and some later. But peak they do, and then we're on our way down!

Fortunately, for most industries, there should be a significant phase of overlap during which computers are actually used to improve our own functioning. Our window of opportunity for the next decade is to become experts at exploiting this help.

Why Are We Suddenly So Bad At Predicting the Future?

Imagine that a monkey got into the control room of the universe and spent the year clicking random buttons. Imagine him hopping about on the ‘one musician less’ button, stomping on the ‘auto-destruct’ lever and gurgling while he thumped repeatedly on the ‘introduce chaos’ switch. Close your eyes and picture him dropping a giant poo on the bright red panel marked ‘do not touch under any circumstances’. That my friends is the only way to think about 2016 – after all, it was the year of the monkey in the Chinese zodiac. It was the year when rational thinking took a beating, when meritocracy became a bad word, when liberalism escaped from the battlefield to a cave in the mountains to lick its wounds. And not surprisingly, a year when projections, predictions and polls made as much sense in the real world as an episode of Game of Thrones on steroids.
Given that much of our lives is spent productively engaging with the future, and making big and small decisions based on how we expect it to unfold, this last point is more important than just the schadenfreude of laughing at pollsters and would-be intellectuals. The present passes too quickly, so really every decision you've ever made in your life is counting on future events to turn out in ways that are favourable. Getting this wrong is therefore injurious to health, to put it mildly. And yet our ability to predict the future has never been under such a cloud in living memory. Why is this so?

Fundamentally, we're wired to think linearly in time, space and even line of sight. We are taught compound interest, but we get it intellectually rather than viscerally. When you first encounter the classic rice grains and chessboard problem, as a smart person, you know that it'll be a big number, but hand on heart, can you say you got the order of magnitude right? i.e. that the total amount of rice on the chessboard would be hundreds of times the world's annual rice production – approximately 461,168,602,000 metric tons? This problem of compounding effects is incredibly hard to truly appreciate, even before you start to factor in all the myriad issues that will bump the rate of change up or down, or when the curve hits a point of inflexion. The Bill Gates quote – 'we over-estimate the impact of technology in 2 years, and under-estimate the impact over 10' – is a direct reframing of this inability to think in a compound manner.
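The chessboard number can be computed exactly, which is a useful antidote to our linear intuition. The mass figure assumes roughly 25 mg per grain, a common rough estimate:

```python
# The chessboard doubling, computed exactly: one grain on the first
# square, doubling on each of the 64 squares. The ~25 mg per grain
# figure is a common rough estimate for a grain of rice.
grains = sum(2 ** square for square in range(64))  # = 2**64 - 1
tonnes = grains * 0.025 / 1e6                      # 25 mg/grain -> metric tons
print(f"{grains:,} grains, roughly {tonnes:,.0f} metric tons of rice")
```

That is over 18 quintillion grains, landing on the ~461 billion tonne figure quoted above. Nobody's gut gets that right on the first try.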

Then there's the matter of space and line of sight. The way the future unfolds is dramatically shaped by network effects. The progress of an idea depends on its cross-fertilisation across fields, geographies and disciplines, across any number of people, networks and collaborations. These collaborations can be engineered to a point, or are the result of fortuitous clustering of minds. In his book 'Linked', Albert-László Barabási talks about the mathematician Erdős, who spent his life nomadically, travelling from one associate's home to another discussing mathematics and, ironically, network theory. Not surprisingly, a lifestyle also practiced for many years by a young Bob Dylan, if you substitute music for mathematics. Or consider the story of the serial entrepreneur in the Rhineland in the 1400s, as told by Steven Johnson in 'Where Good Ideas Come From'. Having failed with a business in mirrors, he was working in the wine industry, where the mechanical pressing of grapes had transformed the economics of winemaking. He took the wine press, and married it with a Chinese invention – movable type – to create the world's first printing press. His name, of course, was Johannes Gutenberg. This kind of leap is not easy to predict, not just for the kind of discontinuity it represents (more on that later), but also because of these networked effects. Our education system blinkers us into compartmentalised thinking which stays with us through our lives. Long ago, a student of my mother's once answered a question about the boiling point of water by saying "in Chemistry, it's 100 degrees Centigrade, but in Physics, I'm not sure". We are trained to be specialists, becoming more and more narrow as we progress through our academic careers, ending up more or less as stereotypes of our profession. Yet human progress is driven by thousands of these networked, collaborative, and often serendipitous examples.
And we live in a world today with ever expanding connections, so it’s not surprising that we have fallen behind significantly in our ability to understand how the network effects play out.

If you want to study the way we typically make predictions, you should look no further than sport. In the UK, football is a year-round sport, so there are games every weekend for 9 months, and also midweek for half the year. And with gambling being legal, there is an entire industry around football betting. Yet the average punter, fan or journalist makes predictions which are at best wilfully lazy. There is an apocryphal story about our two favourite fictitious sardars – Santa Singh and Banta Singh – who decide to fly a plane. Santa, the pilot, asks Banta, the co-pilot, to check if the indicators are working. Banta looks out over the wing and says "yes they are, no they aren't, yes they are, no they aren't…" – this is how a lot of predictions are made in the world of Premier League football today. Any team that loses 3 games is immediately in a 'crisis', while a team that wins a couple of games is deemed to be on its way to glory. Alan Hansen, an otherwise insightful pundit and former great player, will always be remembered for his one comment – "You can't win anything with kids" – which he made after watching a young Manchester United side lose to Aston Villa in the 1995-96 season. Manchester United of course went on to win the league that season, and to dominate it for the next decade and a half. Nobody predicted a Leicester City win in 2016, of course, but win they did. The continuous and vertiginous increase in TV income for football clubs has led to a relatively more equal playing field when it comes to global scouting networks, so a great player can pop up in any team and surprise the league. Yet we find it hard to read the underlying trends, and often find ourselves guilty of treating incidents as trends.

The opposite is, amazingly, also true. We are so caught up with trends that we don't factor in the kinks in the curve. Or, to use Steve Jobs' phrase, the dent in the universe. You can say that an iPhone-like device was sure to come along sooner or later. But given the state of the market – with Nokia's dominance and 40% global market share – you would have bet your house on Nokia producing the next breakthrough device eventually. Nobody saw the iPhone coming, but when it did, it created a discontinuous change that rippled across almost every industry over the next decade. The thing is, we like trends. Trends are rational, and they form a kind of reassuring continuity, so that events can fit our narratives, which in turn reaffirm our world view. And unless we're close to the event, or perennial change seekers and nomads ourselves, it's hard to think of countercyclical events. It's now easy to see how in 2016 we were so caught up in the narrative of progressive liberalisation and the unstoppable path to globalisation that we failed to spot the counter-cyclical events and cues that were right there in our path.

In fact, there are any number of cognitive biases we are guilty of on an everyday basis. This article lists just a dozen of them. My favourites in this list are the confirmation bias and the negativity bias, both of which are exacerbated by social and digital media. While social media has led us to the echo chambers that were the hallmark of 2016, our projection bias is also accentuated by our ability to choose whatever media we want to consume in the digital world, where access is the easy part. Similarly, bad news spreads faster on social networks and digital media today than at any time before in history. Is it possible that, despite knowing and guarding against these biases in the past, we’ve been caught out by the spikes in the impact and incidence of a couple of them in the digital environment we live in today?

To be fair, not everybody got everything wrong. Plenty of people I know called the Donald Trump victory early in the game. And amongst others, John Battelle got more than his share of predictions right. There is no reason to believe that 2017 will be any less volatile or unpredictable than 2016, but will our ability to deal with that volatility improve? One of the more cynical tricks of the prediction game is to make lots of predictions on many different occasions. People won’t remember all your bad calls, but you can pick out the ones you got right at leisure! This is your chance, then, to make your predictions for 2017. Be bold, be counter-cyclical. And shout it out! Don’t be demure. The monkey is history, after all. This is the year of the rooster!

2016/2017 Shifting Battlegrounds and Cautious Predictions for Digital

Innovation slows down in mobile devices but ramps up in bio-engineering. Voice goes mainstream as an interface. Smart environments and under-the-hood network and toolkit evolution continue apace.

For most people I know, 2016 has ranged between weird and disastrous. But how was it for the evolution of the digital market?

The iPhone lifecycle has arguably defined the current hypergrowth phase of the digital market, so it’s probably a good place to start. In the post-Steve Jobs world, it was always going to be a question of how innovative and forward-thinking Apple would be. So far, the answer is: not very. 2016 was an underwhelming year for iPhone hardware (though Apple has tried harder with MacBooks). Meanwhile Samsung, which you suspect has flourished so far by steadfastly aping Apple, ironically finds itself rudderless after the passing of Steve Jobs. Its initial attempts at leapfrogging Apple have been nothing short of disastrous, with the catastrophic performance of the flammable Note 7 phones and batteries. Google’s Pixel phone could hardly have been timed better. By all initial accounts (I’m yet to see the phone myself) it’s comparable but not superior to an iPhone 7, but Google’s wider range of services and software could help it make inroads into the Apple market, especially given the overwhelming dominance of Android in the global OS market. The market has also opened up for OnePlus, Xiaomi and others to challenge for market share even in the west. Overall, I expect the innovation battleground to move away from mobile devices in 2017.

While on digital devices, things have been quiet on the Internet of Things front. There have been no major consumer-grade IoT apps which have taken the world by storm. There have been a few smart home products, but no individual app or product stands out for me. As you’ll see from this list – plenty of ‘interesting…’ but not enough ‘wow’. I was personally impressed by the platform capabilities for enabling IoT applications from companies such as Salesforce, which allow the easy stringing together of logic and events to create IoT experiences using a low-code environment.

AR and VR have collectively been in the news a lot, without actually having a breakthrough moment. The space is steadily maturing, though, helped by the increasing sophistication of VR apps and interfaces and by Google Cardboard. But the most exciting and emotive part of AR/VR has been the HoloLens and holoportation concepts from Microsoft – these are potentially game-changing applications if they can be provided at mass scale, at an affordable price point, and if they can enable open standards for third parties to build on and integrate.

Wearables have had a quiet-ish year. Google Glass has been on hiatus. The Apple Watch is very prominent at Apple stores but not ubiquitous yet. Its key competitor – Pebble – shut up shop this year. Fitbits are now commonplace but hardly revolutionary, beyond reflecting the increasing levels of fitness consciousness in the world today. There are still no amazing smart t-shirts or trainers.

The most interesting digital device of 2016, though, has been the Amazon Echo. First, it’s a whole new category: it isn’t an adaptation or a next generation of an existing product, but a standalone device (or a set of them) that can perform a number of tasks. Second, it’s powered almost entirely by voice commands – “Alexa, can you play Winter Wonderland by Bob Dylan?” Third, and interestingly, it comes from Amazon, for whom this represents a new foray beyond commerce and content. The Echo has the potential to become a very powerful platform for apps that power our lives, and voice may well be the interface of the future. I can see a time when the voice recognition platform of the Echo (or other similar devices) may be used for identity and security, replace phone conversations, or become a powerful tool for healthcare and for supporting the elderly.

Behind the scenes, though, there has been plenty of action over the year. AI has been a steady winner in 2016. IBM’s Watson added a feather to its cap by creating a movie trailer, but away from the spotlight it has been working on gene research, making cars safer, and even helping fight cancer. Equally, open source software and the stuff that goes on behind the websites and services we use every day have grown in leaps and bounds. Containerisation and Docker may not be everybody’s cup of tea, but ask any developer about Docker and watch them go misty-eyed. The evolution of microservices architecture and the maturing of APIs are also contributing to the seamless service delivery that we take for granted when we connect disparate services and providers together to order Uber cabs via the Amazon Echo, or use clever service integrators like Zapier.
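To make the idea of service composition concrete, here is a toy Python sketch of how an integrator chains small, independent services together. The service names and payloads are entirely made up for illustration – real integrators like Zapier wire together actual HTTP endpoints and webhooks rather than local functions.

```python
# Toy illustration of composing independent "microservices" behind small APIs.
# Each function stands in for a service you would normally reach over HTTP.

def geocode_service(address: str) -> dict:
    """Stand-in for a geocoding microservice: address -> coordinates."""
    fake_directory = {"221B Baker Street": (51.5238, -0.1586)}
    lat, lon = fake_directory.get(address, (0.0, 0.0))
    return {"lat": lat, "lon": lon}

def ride_service(lat: float, lon: float) -> dict:
    """Stand-in for a ride-hailing microservice: coordinates -> booking."""
    return {"status": "booked", "pickup": (lat, lon)}

def book_ride(address: str) -> dict:
    """The 'integrator': chains the two services, much as a voice assistant
    ordering a cab would chain a geocoding API and a booking API."""
    location = geocode_service(address)
    return ride_service(location["lat"], location["lon"])

print(book_ride("221B Baker Street"))
```

The point is that neither service knows about the other; the integrator glues their small APIs together, which is what makes swapping providers in and out so easy.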

All of this is held together by an increasing focus on design thinking, which ensures that technology for its own sake does not lead us down blind alleys. Design thinking is definitely enjoying its moment in the sun. But I was also impressed by this video by Erika Hall, which urges us to go beyond just asking users or observing them, and to be driven additionally by a goal and a philosophy.

2016 has also seen the fall of a few icons. Marissa Mayer has had a year to forget at Yahoo. Others whom we wanted to succeed but who turned out to have feet of clay included Elizabeth Holmes at Theranos, alongside the continued signs of systemic ethical failure at Volkswagen. I also see 2016 as the year when external hard drives became pointless: as wifi gets better and cloud services get more reliable, our need to keep a local backup will vanish, especially as most external drives tend to fail over a 3-5 year period. Of course, 2016 was the year of the echo chamber – a reminder that social media, left to itself, insulates us from reality. It was a year when we were our own worst enemies, even though it was the Russians who ‘hacked’ the US elections, and the encryption debate raged on.

One of the most interesting talks I attended this year was at the IIM alumni meeting in London, where a senior scientist from GSK talked about their alternative approach to tackling long-term conditions. This research initiative eschews the traditional chemical approach, which works on the basis that the whole body gets exposed to the medication but only the targeted organ responds – a ‘blunt instrument’. Instead, the new approach is ‘bioelectronic’. Galvani Bioelectronics, set up in partnership with Alphabet, will use an electronic approach to target individual nerves and control the impulses they send to the affected organ – say, the pancreas for diabetes patients. This will be done through nanotechnology, by inserting a rice-grain-sized chip via keyhole surgery. A successful administration of this medicine will mean the patient no longer has to worry about taking pills on time, or even about monitoring insulin levels, as the nano-device will do both and send results to an external database.

Biotech apart, it was a year when Google continued to reorganise itself around Alphabet, when Twitter found itself with its back to the wall, when Apple pondered life beyond Jobs, when Microsoft emerged from its ashes, and when Amazon grew ever stronger. As we step into 2017, I find it amazing that there are driverless cars now driving about on the roads in at least one city, albeit still in testing, and that we are on the verge of re-engineering the human body and brain. I have been to any number of awesome conferences, and the question that always strikes me is: why aren’t we focusing our best brains and keenest technology on the world’s greatest problems? I’m hopeful that 2017 will see this come to fruition in ways we can’t even imagine yet.

Here are 5 predictions for 2017. (Or around this time next year, more egg on my face!)

  • Apple needs some magic – where will they find it? They haven’t set the world alight with the watch or the phone in 2016. The new MacBook Pro has some interesting features, but no world-beaters yet. There are rumblings about cars, but it feels like Apple’s innovation now comes from software rather than hardware. I’m not expecting a path-breaking new product from Apple, but I expect them to become stronger on platforms – including HomeKit and HealthKit – and to see much more of Apple in the workplace.
  • Microsoft has a potential diamond in LinkedIn, if it can get the platform reorganised to drive more value for its users, beyond job searches. Multi-layered network management, publishing sophistication, and tighter integration with the digital workplace are obvious starting points. Microsoft has a spotted history of acquisitions, but there’s real value here, and I’m hoping Microsoft can get this right. Talking of Microsoft, I expect more excitement around HoloLens and VR-based communication.
  • I definitely expect more from Amazon, and for the industry to collectively start recognising Amazon as an innovation leader, held in the same esteem as Apple and Google. Although, like Apple, Amazon will at some point need stars beyond Bezos, and a succession plan.
  • Healthcare, biotechnology, genetics – I expect this broad intersection of humans and technology to get a lot of focus in 2017, and I’m hoping to see a lot more news and breakthroughs in how we engineer ourselves.
  • As a recent convert, I’m probably guilty of a lot of bias when I plump for voice. Recency effect, self-referencing, emotional response over rational – yes, all of the above. But voice is definitely going to be a big part of the interface mix going forward. In 2017, I see voice becoming much more central to interface and app planning. How long before we can bank via the Amazon Echo?

Happy 2017!