Why Are We Suddenly So Bad At Predicting the Future?

Imagine that a monkey got into the control room of the universe and spent the year clicking random buttons. Imagine him hopping about on the ‘one musician less’ button, stomping on the ‘auto-destruct’ lever and gurgling while he thumped repeatedly on the ‘introduce chaos’ switch. Close your eyes and picture him dropping a giant poo on the bright red panel marked ‘do not touch under any circumstances’. That, my friends, is the only way to think about 2016 – after all, it was the year of the monkey in the Chinese zodiac. It was the year when rational thinking took a beating, when meritocracy became a bad word, when liberalism retreated from the battlefield to a cave in the mountains to lick its wounds. And, not surprisingly, a year when projections, predictions and polls made as much sense in the real world as an episode of Game of Thrones on steroids.
Given that much of our lives is spent productively engaging with the future – making big and small decisions based on how we expect the future to unfold – this last point matters for more than the schadenfreude of laughing at pollsters and would-be intellectuals. The present passes too quickly, so every decision you’ve ever made in your life counts on future events turning out in ways that are favourable. Getting this wrong is therefore injurious to health, to put it mildly. And yet our ability to predict the future has never been under such a cloud in living memory. Why is this so?

Fundamentally, we’re wired to think linearly in time, space and even line of sight. We are taught compound interest, but we get it intellectually rather than viscerally. When you first encounter the classic rice grains and chessboard problem, as a smart person, you know that it’ll be a big number, but hand on heart, can you say you got the order of magnitude right? That the total amount of rice on the chessboard would be roughly 1,000 times the world’s rice production of 2010? Approximately 461,168,602,000 metric tons? This compounding of effects is incredibly hard to truly appreciate, even before you start to factor in all the myriad issues that will bump the rate of change up or down, or the moment when the curve hits a point of inflexion. The Bill Gates quote – ‘we over-estimate the impact of technology in 2 years, and under-estimate the impact over 10’ – is a direct reframing of this inability to think in a compound manner.
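For the sceptical, the chessboard arithmetic is easy to check in a few lines of Python. The mass of a rice grain (roughly 25 mg) is my assumption here, and the comparison with 2010 production is approximate:

```python
# Rice-and-chessboard problem: 1 grain on square one, doubling on each
# of the 64 squares. The total is 2**64 - 1 grains.
total_grains = sum(2**square for square in range(64))
assert total_grains == 2**64 - 1

GRAMS_PER_GRAIN = 0.025  # assumed average mass of one grain (~25 mg)

# Convert grams to metric tons (1 tonne = 1,000,000 g).
total_tonnes = total_grains * GRAMS_PER_GRAIN / 1_000_000

print(f"{total_grains:,} grains")    # 18,446,744,073,709,551,615 grains
print(f"{total_tonnes:,.0f} tonnes") # 461,168,601,843 tonnes
```

At around 460 million tonnes of milled rice produced worldwide in 2010, that last figure is indeed on the order of a thousand years of global output – the kind of number our linear intuition reliably gets wrong.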

Then there’s the matter of space and line of sight. The way the future unfolds is dramatically shaped by network effects. The progress of an idea depends on its cross-fertilisation across fields, geographies and disciplines, across any number of people, networks and collaborations. These collaborations can be engineered to a point, or are the result of fortuitous clustering of minds. In his book ‘Linked’, Albert-László Barabási talks about the mathematician Paul Erdős, who spent his life nomadically, travelling from one associate’s home to another discussing mathematics and, fittingly, network theory. Not surprisingly, it is a lifestyle also practised for many years by a young Bob Dylan, if you substitute music for mathematics. Or consider the story of the serial entrepreneur in the Rhineland in the 1400s, as told by Steven Johnson in ‘Where Good Ideas Come From’. Having failed with a business in mirrors, he was working in the wine industry, where the mechanical pressing of grapes had transformed the economics of winemaking. He took the wine press and married it with a Chinese invention – movable type – to create the world’s first printing press. His name, of course, was Johannes Gutenberg. These kinds of leaps are not easy to predict, not just for the discontinuity they represent (more on that later), but also because of these networked effects. Our education system blinkers us into compartmentalised thinking which stays with us through our lives. Long ago, a student of my mother’s once answered a question about the boiling point of water by saying “in Chemistry, it’s 100 degrees Centigrade, but in Physics, I’m not sure”. We are trained to be specialists, becoming more and more narrow as we progress through our academic careers, ending up more or less as stereotypes of our profession. Yet human progress is driven by thousands of these networked, collaborative, and often serendipitous examples.
And we live in a world today with ever expanding connections, so it’s not surprising that we have fallen behind significantly in our ability to understand how the network effects play out.

If you want to study the way we typically make predictions, look no further than sport. In the UK, football is a year-round sport, so there are games every weekend for 9 months and also midweek for half the year. And with gambling being legal, there is an entire industry around football betting. Yet the average punter, fan or journalist makes predictions which are at best wilfully lazy. There is an apocryphal story about our two favourite fictitious sardars – Santa Singh and Banta Singh – who decide to fly a plane. Santa, the pilot, asks Banta, the co-pilot, to check if the indicators are working. Banta looks out over the wing and says “yes they are, no they aren’t, yes they are, no they aren’t…” – this is how a lot of predictions are made in the world of Premier League football today. Any team that loses 3 games is immediately in a ‘crisis’, while a team that wins a couple of games is deemed to be on its way to glory. Alan Hansen, an otherwise insightful pundit and former great player, will always be remembered for the one comment “You can’t win anything with kids”, which he made after watching a young Manchester United side lose to Aston Villa at the start of the 1995–96 season. Manchester United, of course, went on to win the league that season and dominate it for the next decade and a half. Nobody predicted a Leicester City title in 2016 either, but win they did. The continuous and vertiginous increase in TV income for football clubs has created a relatively more equal playing field when it comes to global scouting networks, so a great player can pop up in any team and surprise the league. Yet we find it hard to read the underlying trends, and often find ourselves guilty of treating incidents as trends.

The opposite, amazingly, is also true. We are so caught up with trends that we don’t factor in the kinks in the curve. Or, to use Steve Jobs’ phrase, the dent in the universe. You can say that an iPhone-like device was sure to come along sooner or later. But given the state of the market – with Nokia’s dominance and 40% global market share – you would have bet your house on Nokia producing the next breakthrough device eventually. Nobody saw the iPhone coming, but when it did it created a discontinuous change that rippled across almost every industry over the next decade. The thing is, we like trends. Trends are rational, and they form a kind of reassuring continuity, so that events can fit our narratives, which in turn reaffirm our world view. And unless we’re close to the event, or perennial change-seekers and nomads ourselves, it’s hard to think of counter-cyclical events. It’s now easy to see how, in 2016, we were so caught up in the narrative of progressive liberalisation and the unstoppable path to globalisation that we failed to spot the counter-cyclical events and cues that were right there in our path.

In fact, there are any number of cognitive biases we are guilty of on an everyday basis – this article alone lists a dozen of them. My favourites in the list are the confirmation bias and the negativity bias, both of which are exacerbated by social and digital media. While social media has led us into the echo chambers that became the hallmark of 2016, our confirmation bias is also accentuated by our ability to choose whatever media we want to consume in the digital world, where access is the easy part. Similarly, bad news spreads faster on social networks and digital media today than at any time before in history. Is it possible that, despite knowing and guarding against these biases in the past, we’ve been caught out by the spikes in the impact and incidence of a couple of them in the digital environment we live in today?
To be fair, not everybody got everything wrong. Plenty of people I know called the Donald Trump victory early in the game. And, amongst others, John Battelle got more than his share of predictions right. There is no reason to believe that 2017 will be any less volatile or unpredictable than 2016, but will our ability to deal with that volatility improve? One of the more cynical tricks of the prediction game is to make lots of predictions on many different occasions. People won’t remember all your bad calls, but you can pick out the ones you got right, at leisure! This is your chance, then, to make your predictions for 2017. Be bold, be counter-cyclical. And shout it out! Don’t be demure. The monkey is history, after all. This is the year of the rooster!

2016/2017 Shifting Battlegrounds and Cautious Predictions for Digital

Innovation slows down in mobile devices but ramps up in bio-engineering. Voice goes mainstream as an interface. Smart environments and under-the-hood network and toolkit evolution continue apace.

For most people I know, 2016 has ranged between weird and disastrous. But how was it for the evolution of the digital market?

The iPhone lifecycle has arguably defined the current hypergrowth phase of the digital market, so it’s probably a good place to start. In the post-Steve Jobs world, it was always going to be a question of how innovative and forward-thinking Apple would be. So far, the answer is: not very. 2016 was an underwhelming year for iPhone hardware (though Apple has tried harder with MacBooks). Meanwhile Samsung, which you suspect has flourished so far by steadfastly aping Apple, ironically finds itself rudderless after the passing of Steve Jobs. Its initial attempts at leapfrogging Apple have been nothing short of disastrous, with the catastrophic performance of the inflammable Note phones and batteries. Google’s Pixel phone could hardly have been timed better. By all initial accounts (I’m yet to see the phone myself) it’s comparable, though not superior, to an iPhone 7, but Google’s wider range of services and software could help it make inroads into the Apple market – especially given the overwhelming dominance of Android in the global OS market. The market has also opened up for OnePlus, Xiaomi and others to challenge for market share even in the west. Overall, I expect the innovation battleground to move away from mobile devices in 2017.

While on digital devices: things have been quiet on the Internet of Things front. No major consumer-grade IOT app has taken the world by storm. There have been a few smart-home products, but no individual app or product stands out for me. As you’ll see from this list, there is plenty of ‘interesting…’ but not enough ‘wow’. I was personally impressed by the platform capabilities for enabling IOT applications from companies such as Salesforce, which allow easy stringing together of logic and events to create IOT experiences using a low-code environment.

AR and VR have collectively been in the news a lot without actually having a breakthrough moment, despite the increasing sophistication of VR apps and interfaces, Google Cardboard, and the steady maturing of the space. The most exciting and emotive part of AR/VR has been the HoloLens and holoportation concepts from Microsoft – these are potentially game-changing applications if they can be provided at mass scale, at an affordable cost point, and if they can enable open standards for 3rd parties to build on and integrate.

Wearables have had a quiet-ish year. Google Glass has been on hiatus. The Apple Watch is very prominent at Apple stores but not ubiquitous yet. Its key competitor – Pebble – shut up shop this year. Fitbits are now commonplace but hardly revolutionary, beyond reflecting the increasing levels of fitness consciousness in the world today. There are still no amazing smart t-shirts or trainers.

The most interesting digital device of 2016, though, has been the Amazon Echo. First, it’s a whole new category: it isn’t an adaptation or a next generation of an existing product, but a standalone device (or a set of them) that can perform a number of tasks. Second, it’s powered almost entirely by voice commands – “Alexa, can you play Winter Wonderland by Bob Dylan?” Third, and interestingly, it comes from Amazon, for whom this represents a new foray beyond commerce and content. The Echo has the potential to become a very powerful platform for apps that power our lives, and voice may well be the interface of the future. I can see a time when the voice recognition platform of the Echo (or other similar devices) may be used for identity and security, replace phone conversations, or become a powerful tool for healthcare and support for the elderly.

Behind the scenes, though, there has been plenty of action over the year. AI has been a steady winner in 2016. IBM’s Watson added a feather to its cap by creating a movie trailer, but away from the spotlight it has been working on gene research, making cars safer, and even helping fight cancer. Equally, open source software and the stuff that sits behind the websites and services we use every day have grown by leaps and bounds. Containerisation and Docker may not be everybody’s cup of tea, but ask any developer about Docker and watch them go misty-eyed. The evolution of microservices architecture and the maturing of APIs are also contributing to the seamless service delivery we take for granted when we connect disparate services and providers together to order Uber cabs via the Amazon Echo, or use clever service integrators like Zapier.

All of this is held together by an increasing focus on design thinking, which ensures that technology for its own sake does not lead us down blind alleys. Design thinking is definitely enjoying its moment in the sun. But I was also impressed by this video by Erika Hall, which urges us to go beyond just asking users or observing them, and to be driven additionally by a goal and a philosophy.

2016 has also seen the fall of a few icons. Marissa Mayer has had a year to forget at Yahoo. Others who we wanted to succeed but who turned out to have feet of clay included Elizabeth Holmes at Theranos, alongside the continued signs of systemic ethical failure at Volkswagen. I also see 2016 as the year when external hard drives started to become pointless: as wifi gets better and cloud services get more reliable, our need for a local back-up will vanish, especially as most external drives tend to underperform over a 3–5 year period. Of course, 2016 was the year of the echo-chamber – a reminder that social media left to itself insulates us from reality. It was a year when we were our own worst enemies, even though it was the Russians who ‘hacked’ the US elections, and the encryption debate raged on.

One of the most interesting talks I attended this year was at the IIM alumni meeting in London, where a senior scientist from GSK talked about their alternative approach to tackling long-term conditions. This research initiative eschews the traditional ‘chemical’ approach, which works on the basis that the whole body gets exposed to the medication but only the targeted organ responds – a ‘blunt instrument’. Instead, the new approach is ‘bio-electronic’. Galvani Bioelectronics, set up in partnership with Alphabet, will use an electronic approach to target individual nerves and control the impulses they send to the affected organ – say, the pancreas, for diabetes patients. This will be done through nanotechnology, by inserting a rice-grain-sized chip via keyhole surgery. A successful administration of this medicine would mean the patient no longer has to worry about taking pills on time, or even monitoring insulin levels, as the nano-device will do both and send results to an external database.

Biotech apart, it was a year when Google continued to reorganise itself around Alphabet. When Twitter found itself with its back to the wall. When Apple pondered life beyond Jobs. When Microsoft emerged from its ashes, and when Amazon grew ever stronger. As we step into 2017, I find it amazing that there are driverless cars now driving about on the roads in at least one city, albeit still in testing. That we are on the verge of re-engineering the human body and brain. I have been to any number of awesome conferences, and the question that always strikes me is: why aren’t we focusing our best brains and keenest technology on the world’s greatest problems? I’m hopeful that 2017 will see this come to fruition in ways we can’t even imagine yet.

Here are 5 predictions for 2017. (Or around this time next year, more egg on my face!)

  • Apple needs some magic – where will they find it? They haven’t set the world alight with the watch or the phone in 2016. The new MacBook Pro has some interesting features, but no world-beaters yet. There are rumblings about cars, but it feels like Apple’s innovation now comes from software rather than hardware. I’m not expecting a path-breaking new product from Apple, but I am expecting them to become stronger on platforms – including HomeKit and HealthKit – and to see much more of Apple in the workplace.
  • Microsoft has a potential diamond in LinkedIn, if it can get the platform reorganised to drive more value for its users beyond job searches. Multi-layered network management, publishing sophistication, and tighter integration with the digital workplace are an obvious starting point. Microsoft has a spotted history of acquisitions, but there’s real value here, and I’m hoping Microsoft can get this right. Talking of Microsoft, I also expect more excitement around HoloLens and VR-based communication.
  • I definitely expect more from Amazon, and for the industry to collectively start recognising Amazon as an innovation leader held in the same esteem as Apple and Google. Although, like Apple, Amazon will at some point need stars beyond Bezos, and a succession plan.
  • Healthcare, biotechnology, genetics – I expect this broad area of human-technology to get a lot of focus in 2017 and I’m hoping to see a lot more news and breakthroughs in how we engineer ourselves.
  • As a recent convert, I’m probably guilty of a lot of bias when I plump for voice. Recency effect, self-referencing, emotional response over rational – yes, all of the above. But voice is definitely going to be a big part of the interface mix going forward. In 2017, I see voice becoming much more central to interface and app planning. How long before we can bank via the Amazon Echo?

Happy 2017!

Dear Uber, It’s Time For a Man To Man Talk.

 
Dear Uber, 
 
I feel it’s time to have a man to man talk. Or, to put it more pedantically, a human being to voraciously ambitious young company focused on world domination on other people’s money, at the significant cost of human dignity talk. 
 
I mean, wtf people? You launched a taxi service in New Delhi, in India, a city where women till recently balked at the idea of getting onto public transport for fear of being groped or touched. Where carrying a safety pin was considered prudent in case of the need for self-defence. Where most women think many times about taking any form of transport alone after dark. A city that has witnessed horrific crimes against women in frighteningly recent memory. Into this environment, you dropped your oh-so-convenient taxi service, which does no more than the mandatory background checks and takes little responsibility. Did you really think this was going to go well?
 
I am a huge supporter of your technology. I’ve written about it here and here. I’m also a huge supporter of your service, in general. I’ve even compared it to impressionism, and that’s pretty high praise. So this is not some disgruntled rant by somebody who doesn’t get technology and wants to stand in the way of progress. Just wanted to get that out of the way before we go any further. 
 
Because the problem is obviously not limited to New Delhi or any specific place, as this terrible incident from Boston suggests. Let’s face it: if, hypothetically speaking, I had evil intentions, becoming an Uber driver might just give me the kind of opportunity my imaginary dark side craves. After all, you take no liability, so naturally you are more carefree with your background checks.
 
The irony is that technology could actually make it much safer to take a cab. To start with, you might consider using black boxes instead of consumer-grade mobile devices, as these guys do – they would be much harder to turn off. Second, even with your current set-up, the moment a GPS device gets turned off for more than a minute, a red flag should go up, triggering a call to the driver and the customer. If those calls go unanswered, that should trigger a call to the police. You should also be able to track whether the car has gone significantly off the route indicated by the customer, and you could probably mark off busy and lonely spots on a map. There must be a hundred other ways for smart technologists like you to make your journeys safer.
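To make the point concrete, here is a minimal sketch of that escalation logic in Python. Every name and threshold here is hypothetical – this is the shape of the idea, not Uber’s actual system:

```python
# Hypothetical trip watchdog: escalate when GPS pings go silent.
# One minute of silence -> call driver and rider; three minutes
# (i.e. the calls presumably went unanswered) -> call the police.
from dataclasses import dataclass, field

SILENCE_LIMIT_S = 60  # assumed "red flag" threshold, in seconds

@dataclass
class TripMonitor:
    last_ping_s: float = 0.0          # time of last GPS ping (s since trip start)
    alerts: list = field(default_factory=list)

    def ping(self, now_s: float) -> None:
        """Record a normal GPS ping from the car's device."""
        self.last_ping_s = now_s

    def check(self, now_s: float) -> None:
        """Periodic check: raise the appropriate alert for the silence gap."""
        silence = now_s - self.last_ping_s
        if silence > 3 * SILENCE_LIMIT_S:
            self.alerts.append("call police")           # escalation step 2
        elif silence > SILENCE_LIMIT_S:
            self.alerts.append("call driver and rider")  # escalation step 1

trip = TripMonitor()
trip.ping(now_s=30)    # normal ping at t=30s
trip.check(now_s=50)   # 20s of silence -> fine, no alert
trip.check(now_s=120)  # 90s of silence -> phone the driver and rider
trip.check(now_s=240)  # 210s of silence -> escalate to the police
print(trip.alerts)     # ['call driver and rider', 'call police']
```

A real system would obviously need de-duplication of alerts, tolerance for patchy mobile coverage, and the off-route detection mentioned above, but none of that is exotic engineering.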
 
I’m sure world domination is within your reach, but it seems like you are your own adolescent worst enemy right now. After that ill-advised rant against a woman journalist, and that completely over-the-top idea of tracking down another journo with carefree ignorance of privacy, you’d have thought that you would have battened down the hatches and focused very hard on doing the right thing.
 
But apparently not. 
 
May I suggest that you seriously consider appointing a Chief Ethics Officer? I appreciate that this may cause some confusion with your existing CEO, but from where I stand, your Ethics Officer may just be the most important person in the company right now, given that he or she stands between you and implosion. I get that you’ve hired a Chief Privacy Officer, but I think that ship has sailed. I also understand that you’ve got helpful lists for customers, such as this one. But isn’t that a bit like giving out road-safety instructions whilst dishing out licences to dangerous drivers?
 
I sincerely hope you go from strength to strength and I will likely be availing myself of your service at certain moments. There is simply no better way to get home from Heathrow at the end of a long day. Btw, the Spotify integration was cool, and fun. 
 
However, I will be advising all my women friends to stay away from Uber for a while until you’re able to demonstrate that you do really care. And there will be all those other times when I could use your service, or not. And I probably won’t. And you do realise that you can’t aim for world domination by being a last resort. 

Internet of Things – Hype & Hope

(I had the privilege of speaking about IOT at the Oxford Technology and Media forum yesterday. What follows is the gist of my session and some thoughts from the panel discussion)

The tech industry is often guilty of pushing technology solutions to consumers without focusing on the benefits, the emotions and the simplicity. Invariably, businesses that get this do better at selling tech to consumers. Apple are clearly the masters at it, but UK customers will know that after many years of ‘interactive television’ discussions, what customers actually bought were ‘Sky+’ and ‘red button’ services. (The technology didn’t actually deliver on the promise, but that’s a different story.)

So we come to the Internet of Things, where I believe we’ve swung to the other end of the pendulum. We’ve created a pithy, catchy phrase, something that everybody can relate to without being daunted by the jargon. I would personally have preferred the internet of stuff (stuff is cooler than things). But the internet of things means (pardon the expression) bugger-all when it comes to actually buying, implementing or solving something.

Maybe I’m being harsh. It’s a catch-all phrase conveying a general wave of technologies, much like “digital convergence” in the broadcast and comms space. But it’s a very loaded phrase, and it masks many layers of complexity that haven’t yet been resolved to the point where they can be implemented – or even understood by the consumer.

The IOT includes communication between machines, between people and machines, and between people and people via machines. It includes wearables and all manner of sensors, an ever-increasing ocean of data, and an implicit assumption of an economically viable, reliable and available network. And, so far, very few standards.

After all, we’re all spoilt by the Internet. In the world of standards-driven browsers, we only had to worry about the browser environment. The most complex questions in the early days of the web included ‘web safe’ colours and, later, pushing the limits of HTML. You never had to think about the OS or the device (are you viewing the website on a Dell or an IBM laptop?). You didn’t have to think about whether the user was sitting or standing or walking around. And all you had to know was a URL, and the internet would find the website from over 50 million computers in a fraction of a second. Even transactions and ecommerce are now taken for granted.

In the IOT world, none of this is standard, and everything has to be thought through from scratch. What’s the user interface of a ‘thing’? If it’s a sensor on a coffee machine versus one on a door, how should we access the data, and how can we interact with the thing? The design challenge moves from ‘interface’ design to experience and even environment design. Who designs the experience of walking into a retail store armed with iBeacons or other sensors? Design challenges will range from fitting an antenna while managing heat dissipation, to figuring out how to retain product aesthetics while adding a bunch of tech.

Service design has been a term in vogue for a few months now, but it is fundamental to the creation of IOT models. We must take a design-centric view and build from there. That’s the only way we’ll get around to focusing on the right problems to solve, and so ensure adoption.

As with all emerging technologies, we’re in a world of ‘compound change’ – where each layer builds on previous layers, creating an exponential change curve which is near impossible for us to predict, since we’re still very used to thinking in linear terms. What is intuitive to me is that we’ll get entirely new companies dominating the IOT space, in the way that Facebook, LinkedIn and Twitter dominate the social sphere, Google and Amazon dominate the web, Apple and Samsung dominate mobile devices, and Microsoft and Intel dominated the desktop world.

Because this will take a whole new business model. It will shift value, destroy old models and create entirely new services. Most often, we think of new tech as a better way of doing what we do today. So the ‘better’ model leads us to thinking about how our fridge will tell us when it’s out of milk, rather than ‘different’ models – perhaps our fridge telling us which of the foods we’re storing has the earliest use-by date, so we can modify our consumption appropriately. Or other, more imaginative and useful behaviours.

Undoubtedly the way business models evolve will involve adding layers of services to existing and new products, and the value of the service will outstrip the value of the product. You may pay more for the service of tracking your weight, and for the feedback on your lifestyle and diet, than you do for the weighing scale itself. In fact, asset ownership models may change, with companies willing to give you the asset for free in order to lock you into the service, or simply following an asset-leasing model, which brings down your outlay but enables a longer-term revenue stream for the seller. Soon we should be able to view this information and services layer explicitly, and this making-explicit of the service and information layer may be one of the biggest sources of consumer value in the IOT. It would enable us to understand better the total cost of any product (say a sweater, or a vacuum cleaner) and make different choices on that basis. It would also align value realisation with costs – imagine a washing machine which you lease and pay per use.
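As a back-of-envelope illustration of that pay-per-use idea (all the prices below are invented for the sketch, not real market figures):

```python
# Hypothetical washing machine: buy it outright, or lease it and pay per wash.
PURCHASE_PRICE = 400.0   # invented one-off price of the machine
PRICE_PER_WASH = 0.50    # invented pay-per-use tariff
WASHES_PER_YEAR = 200    # assumed household usage

def lease_cost(years: int) -> float:
    """Cumulative cost of the pay-per-use lease after `years` of use."""
    return PRICE_PER_WASH * WASHES_PER_YEAR * years

# The seller trades an upfront sale for a long-term revenue stream:
# with these numbers the lease overtakes the purchase price after
# PURCHASE_PRICE / (PRICE_PER_WASH * WASHES_PER_YEAR) = 4 years.
for years in (2, 4, 6):
    print(years, lease_cost(years))   # 200.0, then 400.0, then 600.0
```

The point of making the service layer explicit is precisely that a buyer could run this comparison per product, aligning what they pay with the value they actually realise.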

Although it’s tempting to consider just the things we acquire and own, there are also all those things we use which form the asset base for service delivery – from smart meters to hotel rooms, railway stations and rented cars. These can all follow the same principles of creating explicit service and information layers, so that maintenance, usage, cost and value can all be tracked more easily. Then you have natural resources and public environments – weather, floods, pollution tracking, and more.

As has been noted, it is almost impossible to talk about the IOT, or emerging technology of any kind, without talking about data, privacy and security. I used to think, like everybody else, in terms of a data brokerage, or info-mediary. Now I think data brokerage should be a feature built into every product: a data-brokerage module would ensure that consumers’ data is stored, transacted and valued in a way that is fair to both sides, and in a transparent manner. Really, you can’t ask for more than that.

Undoubtedly the IOT is a big deal. We’re talking about billions of connected devices changing the way we live our everyday lives. The transformative potential of this can barely be imagined. I just hope we use it to solve some of the bigger problems we face – the energy crisis, caring for an ageing population, getting supplies more efficiently to the needy across the world – and not spend too much time debating whether our kettle should gossip with our washing machine.

Resisting Technology Is Like Resisting Ageing

My wife (Karuna) and I often have differing views on a number of things, as is common. And almost always, she’s right. But there are some areas where we agree to disagree.
 
Karuna doesn’t drive a manual car; she’s very comfortable in an automatic. I love driving – either manual or automatic. Obviously, the automatic car is doing a whole lot of thinking for you, and probably doing a few things better. By matching the gear to the speed more effectively, it’s likely to be more fuel-efficient, especially in stop-start city driving. But like most people who drive a manual car, I hanker for the control of the stick shift and the level of influence I have on the drive. It feels like I’m closer to the engine. The automatic car provides a level of abstraction and lets anybody drive, without mastering the intricacies of gear shifts and clutch control. Somewhere in the recesses of my mind, the message keeps flashing: the automatic is all right, but a manual car is a real drive.
 
We have the opposite stances on digital cameras. As somebody who has formally learnt photography and spent time in dark-rooms developing prints, she loves the control and human input in the process. I enjoy the fact that I can get great photographs just by framing the picture. Karuna gave me tips on framing, but the camera does the rest – managing exposure, focus, lighting, and even the intensity and balance of colours. And all of this comes bundled with a phone, so there’s no more wandering around with an SLR camera slung around your neck. I love it. For her, it’s anathema.
 
The pattern here is simple: when we invest time and effort in building a skill or a technique, we are invested in the process, not just the output. And what almost every technological advancement tends to do is democratise a previously closely held skill, putting the same level of competence into the hands of amateurs and novices. For the experts this is distasteful or downright annoying, but more importantly, it’s often professionally disruptive. The former, because it devalues the expert process we are attached to; the latter, because it challenges our expertise and renders us less valuable.
 
“The Knowledge” is the course that all London black cab drivers go through. For decades the London cab has been famous – one of the icons of the city. Apart from the car itself, which is custom designed and manufactured for the purpose, the drivers are famed for their familiarity with the city and its routes. The Knowledge comprises some 320 routes through London, covering 25,000 streets and 20,000 landmarks. A black cab driver is expected to know them all. Qualifying takes 2-4 years on average: given any start point and end point across those hundreds of routes in the exam, the driver is expected to know the most efficient way of getting from one to the other. The number of qualified drivers is controlled. Becoming a cab driver typically takes an investment of 30,000, in addition to some 25 hours a week of study over 3 years. And for journeys of 30 minutes or more, the London cab is typically twice the price or more of the privately run ‘mini-cabs’ that also operate in an organised manner in London.
 
Since the advent of sat-navs, any driver can find locations, plan routes and optimise journeys for an investment of under a hundred pounds – and nowadays the smartphone does just as well. Today, every user who gets into a taxi is more likely than not to be carrying a device that provides exactly the same knowledge of routes, directions and traffic conditions that the black cab driver accumulated over 3 years. Short of injecting The Knowledge into the brain, à la The Matrix, the first-time tourist in London is now as well equipped to navigate the city as the black cab driver.
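What the sat-nav does is, at heart, a shortest-path search over a road graph – essentially the computation The Knowledge drills into a driver’s head over years, run in milliseconds. Here’s a minimal sketch using Dijkstra’s algorithm; the junction names and distances are invented purely for illustration, and a real sat-nav would weight edges by travel time and live traffic rather than raw distance:

```python
import heapq

def shortest_route(graph, start, end):
    """Dijkstra's algorithm: the core computation behind any sat-nav.
    graph maps each junction to a {neighbour: distance} dict."""
    queue = [(0, start, [start])]   # (distance so far, junction, route taken)
    seen = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == end:
            return dist, route
        if node in seen:
            continue
        seen.add(node)
        for neighbour, step in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (dist + step, neighbour, route + [neighbour]))
    return None  # no route exists

# A toy road graph (junctions and distances invented for illustration):
roads = {
    "Paddington": {"Marble Arch": 2, "Baker Street": 1},
    "Baker Street": {"Oxford Circus": 2},
    "Marble Arch": {"Oxford Circus": 1},
    "Oxford Circus": {"Covent Garden": 1},
}
print(shortest_route(roads, "Paddington", "Covent Garden"))
```

The point isn’t the code – it’s that a computation that once demanded years of human memorisation is now a commodity running on every phone.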
 
Of course, you still need to get a taxi, and black cabs are ubiquitous in London, so you’re likely to hail one anyway. Or you would have, until the arrival of the brigade of taxi apps – and the poster child of taxi applications, Uber. Now you just send up a digital flare while you’re working your way through dessert, and you can be sure that by the time you’re out on the street, the taxi is likely to be there. Not a London cab, but a less expensive car with a similar assurance of safety and comfort.
 
Not that London cabs are luddites. The Hailo and GetTaxi apps do exactly this for black cabs. The whole experience of calling a taxi has changed forever: you broadcast a request, and the closest of the many available taxis responds. It’s the same for any category of cab – even the cab companies that take bookings now do so through apps. It’s just that the price premium charged by London cabs is no longer sustainable.
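The dispatch model described above – broadcast a request, the closest cab responds – reduces to a nearest-neighbour match. A toy sketch (not any app’s actual matching logic; the cab IDs and coordinates are invented, and straight-line distance stands in for real road distance):

```python
import math

def nearest_cab(cabs, rider):
    """Pick the closest available cab to the rider's location.
    cabs: {cab_id: (lat, lon)}; rider: (lat, lon).
    Euclidean distance is a stand-in for real road distance."""
    def dist(pos):
        return math.hypot(pos[0] - rider[0], pos[1] - rider[1])
    return min(cabs, key=lambda cab_id: dist(cabs[cab_id]))

# Invented positions for illustration:
cabs = {
    "cab_A": (51.510, -0.130),
    "cab_B": (51.520, -0.100),
    "cab_C": (51.500, -0.120),
}
print(nearest_cab(cabs, (51.503, -0.124)))
```

A production dispatcher would of course match on estimated pickup time over the road network, not straight-line distance, but the shape of the problem is the same.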
 
There are plenty of other services run by local cab companies which come with apps. I use a company called Swift, which has a reliable app; one of its drivers – let’s call him Bob – asks me about the app whenever he picks me up. The last time around, we discussed some of the features the app should add. He is very engaged with the idea of the app making the experience better.
 
As I write this, all over the world, incumbent taxi services are warring with new services such as Uber and Lyft – which are, by the way, just marketplaces, not car services themselves. Much of the existing legislation does not cover this model, so the incumbent services are lobbying governments for protection. In Germany, a cab licence costs over $250,000. Understandably, drivers who have paid that sum are not happy to see their returns diminished by competition from new, technologically enabled entrants. Many cities, including Munich, Dusseldorf, Berlin and Hamburg, are considering declaring Uber illegal, arguing primarily that as taxi services, Uber-enabled cabs should pay the same licence fee.
 
In Seoul, the government’s concerns centre on the safety of the vehicles, background checks on drivers, and the impact on the local taxi trade. The last may be the most honest reason, in most parts of the world – even though in Seoul, Uber is more expensive than the regular taxis.
 
Even at home in the US, Uber has faced the law – in Virginia, for example, where the government has asked Uber to ‘cease and desist’ until it obtains the ‘proper authority’.
 
Brussels has already banned Uber. Barcelona, Paris and other major European cities have discussed banning it. There have been strikes in London and Milan. All of these are examples of old markets and old legislation trying to keep up with new business models. Even Neelie Kroes has criticised the bans.
 
The pattern that repeats itself is that markets switch quickly, but legislation takes time. Most taxi apps now offer ride sharing, in-app payments, and a host of other features which significantly improve the experience for the user.
 
Defending the old model even in the face of new technology creates a precipice from which the fall can be sudden and dramatic – witness the music industry, which reaped the benefits of digital technology for many years but failed to adapt to the internet’s new models. People find ingenious methods for using the new technology to the benefit of suppliers and customers, even as regulators and enforcers fume. 
 
So where does that leave the Black Cab driver who has just spent years mastering “The Knowledge” to qualify to drive a black cab in London? Is this the end of the road for him? Is this one more example of technology rendering a valuable skill useless? 
 
Your guess is as good as mine, but for a glimpse of what could happen, let me take you back a hundred and fifty years or so, to the invention and spread of photography. I’ve written about this in more detail here, but the short version is this: photography democratised portraiture, and rendered hundreds of artists jobless. Any amateur armed with a camera could take a photo more accurate and lifelike than the work of the best painters. So what did these artists do? Many presumably changed professions; some undoubtedly fell on hard times. But out of this, some decided that their role was not to represent reality but to interpret it. It is no surprise, therefore, that the birth of Impressionism coincided with the spread of photography.
 
So when democratisation hits your area of expertise – as it will, sooner or later – you will find yourself with a choice between extinction and adaptation. Will you be like the Impressionists and evolve? Or will you fall on your sword (or paintbrush)? Will you look to regulators for help? Or will you create new markets? After all, even decision-making and ‘management’ expertise is being democratised through analytics and knowledge systems.
 
Either way, the challenge for regulators, as always, is to move at the pace of technology and markets. The challenge for you is to evolve – to find or create a market as technology democratises your specialist skill. Resisting the change, though, is not really an option. You might as well try to resist ageing.

Revolution By A Thousand Digital Cuts

You can say that with every new wave of technology, the format, structure and even the content of media changes. Or you can simply say that the medium is the message. Either way, publishing just isn’t what it used to be. Here’s my wide-angle view of some of the many changes bubbling through the publishing world.

Self-publishing in all its many forms continues to flourish and grow. Just when you thought that between WordPress and Blogspot, that was all there was to it, you now have a raft of new platforms, including Medium and Hi. I’m still working out the real value of these tools, beyond the fact that they are beautifully laid out and easy to set up for the infrequent writer. I don’t yet feel that the writing on these platforms is inherently better than on any other blogging platform. But clearly there’s a big market for self-expression, and that is really a good thing.

If blogging is the selfie of publishing, social media must be its new mirror. As much as social media is a means of connecting with others, in an increasingly perverse way it also seems to represent how we see ourselves through their eyes. This kind of voyeuristic narcissism is reflected in many ways, but none more so than the strange and growing trend of self-categorisation through arbitrary & meaningless ontologies – abbreviated, for the purpose of this discussion, as ‘scamo’. Facebook is full of scamos. What colour is your aura? Which Downton Abbey character are you? Which superhero are you? Which city are you? Where will it end? Which Lego piece are you? (I got the flat green base.) Which paper size are you? (I got A3 – gutted!) Which Bollywood item girl are you? (I got Yana Gupta, but I can’t see the resemblance.) And inevitably, which type of common cold virus are you? (I got adenovirus – so distinguished!) See how easy it is? You can make up your own lists as you go. You can see where this will end, right? Which scamo are you?

The disturbing trends don’t end there. I pray every morning that the phrase “what happens next will xxxx you” (insert appropriate transitive verb) could be banned from headlines. “This surgeon started to operate on his patient. What happened next will amaze you!” “This man tried to drink printing ink. What happened next will stun you.” “This man tried to click on a link in Facebook. What happened next will baffle you.” But of course, in the click economy, it’s all fair game. As long as it makes a few (million) people click on the link, it’s a successful headline.

To be fair, the original story headlines competed with other headlines in the same newspaper, because typically you had already bought, or were already reading, that paper. At best, your lead story competed with the main headlines of other papers. Today you’re competing with a stream-of-consciousness flow of headlines, curated, ad-inserted, thrust upon you by friends and family, and wished upon yourself by those links you clicked unsuspectingly last week. Forget 15 minutes of fame – each headline gets about 1.5 seconds before it’s history in your eyes.

Facebook and Twitter have definitely become strong sources of news curation for me, much more so than TV or any single media organisation. But in the lead are still aggregation apps such as Flipboard and Zite. I was quite gutted to learn that Zite had sold itself to Flipboard, as they represented two very distinct approaches, both valuable, and I fret that the result will be a bit of both but neither at its optimum. Circa is quite interesting (thanks @maria_axente!) because it allows you to track a single story as it evolves.

But then you have tools like Paper.li and others which allow the aggregation itself to be automated in a democratised format. This is basically software eating media eating software. Pretty soon the only real value will lie in original content. This is why players like the Economist, the New York Times and HBR are likely to have long lives: they are effectively becoming the HBOs of the publishing space. Everybody else is aggregating, sorting, distributing, dicing and slicing. It’s worth pointing out that players like Business Insider, Outbrain and Bleacher Report are trying hard to build businesses that truly exploit the new distribution in different ways. But while Business Insider – founded by Henry Blodget, and counting Bezos among its investors – does focus on good, original content, the others seem to want to flood you with content under titillating headlines, so they can financially surf the click economy.
 
One of the outcomes of the smartphone-enabled, Instagram- and Facebook-fuelled environment we live in is that I’ve come to expect pictures wherever a picture is required. As a consequence, I struggle through long descriptive paragraphs in books. I appreciate that it would have been hard for Dickens to simply Instagram a picture of foggy London, and perhaps we’re the better for it. But with the increasing democratisation of the tools and knowledge of image and video creation, I think a new wave of storytelling is just around the corner. A book created for electronic consumption should have pictures, videos, hyperlinks and more, all creatively and effectively used to enhance the storytelling. This is not to denounce the metaphor as a bedrock of great content, but to provide an experience that truly exploits the medium. Just imagine Gibson’s Neuromancer told through this kind of multimedia!

Certainly, there’s plenty of excitement and enough funding for new ventures in news and publishing, as this piece from the FT points out, and there is ongoing innovation, as start-ups like Blendle demonstrate. I personally think we are just at the point where we start to appreciate the value of news as distinct from entertainment, and stop clubbing the business models together. Certainly, when ‘rock star’ geeks such as Bezos and Nate Silver are getting into the game, there must be plenty left to play for!

In the mean time, I’m off to post my selfie to Facebook and Twitter, so I can see what others say about it, so I can figure out which mythically egocentric character I resemble the most.