6 Business Lessons from the World Cup 2018

I spent 12 days in Russia, watching some football games and soaking up the carnival atmosphere of the World Cup. While there are many personal moments I will cherish, I’m also reflecting on some professional and business lessons I took away from this once-in-a-lifetime experience.

(1) Clockwork Efficiency

Some 20 minutes before the World Cup final was due to start, Ronaldinho had just exited, there were performers on the pitch, the flags had to be unfurled, the anthems still had to be played, and there was mayhem all around. Yet the game started on the dot at 6 PM local time. This was true of almost every game – the last 30 minutes leading up to kick-off looked chaotic, but on reflection were running like clockwork, presumably because the hundreds of people involved all knew their cues, somebody was watching the clock, and they had practised this to death. Kudos to FIFA and the organisers for this display of casual efficiency. Bottom line: you don’t have to go all military to retain operational efficiency. Most large events or businesses have a similar veneer of chaos, but you can always tell the ones that are operationally tightly run by the extent to which they keep to time.

Even getting close to a hundred thousand people in and out of stadiums, and to and from metro stations, was managed quite effectively with the help of hundreds of volunteers. Interestingly, there were a lot of police and military or paramilitary personnel in uniform, but the primary interaction with spectators was through the volunteers. So people had a pleasant interaction with a bunch of young people, while being clear that any bad behaviour would be dealt with. The additional lesson here is to distinguish between the experience and the governance.

(2) The Impact of Wilful Internal Strife

One of the stories of the World Cup was the extent to which internal strife affected the performance of the tournament favourites. The two examples here are the last two World Cup winners – Germany and Spain – and both provide interesting insights. On the eve of the World Cup, Spain hit a snag. Real Madrid announced Lopetegui, the Spain coach, as their next manager. The timing could not have been worse, and Madrid probably had their own internal political issues to address, but it was done unilaterally and without consulting the Spanish FA. Consequently, the president of the FA removed Lopetegui from his post and appointed Fernando Hierro in his place. Hierro was already part of the coaching set-up and no outsider, but it was a big ask at a critical juncture. When Spain played Russia and passed the ball aimlessly for 120 minutes, you could question why he didn’t come up with a different tactical model or instructions. Why did he play Koke, a singularly unadventurous defensive midfielder, rather than add to his goalscoring firepower? Perhaps he was adopting a safety-first approach as a new manager. Perhaps he was out of his depth. Or the fact that Hierro was himself a defensive midfielder may have had something to do with it. You can question the set of decisions leading up to Hierro’s appointment, but it was made on principle, and Spain suffered.

Germany had the same manager – Loew – who had been at the helm for one of the most successful periods in their footballing history. But their split was more internal. Following an appearance with Erdogan, the Turkish president, there was a backlash against two players of Turkish descent, Ozil and Gundogan, and reports emerged of a split between the Bavarians and the rest of the team. Again, Germany’s disjointed performance was obvious for all to see, and we are still seeing the aftermath, with the German football fraternity split over Ozil’s decision to leave the national team.

The obvious lesson is that no organisation can perform if there are significant internal schisms or disruptions to the operating model at critical junctures. If either of these incidents had happened six months earlier, you could argue that both teams would have got over them by the time the cup came around. But the more interesting question here is about principles. The Spanish FA took a stand that on the surface was a principled one (though some called it an ego issue), and paid the price. The German players of immigrant origin were dealing with their own identity issues. The bigger lesson to be learnt here is how to deal with these kinds of issues without letting them impact performance. Any answers?

(3) Leadership Lessons

Ms. Kolinda Grabar-Kitarovic was a one-person leadership lesson through the late stages of the tournament. She stood with the fans in her team jersey, as one of the people. She visited the Croatian team in their dressing room after their game with Denmark, with no pomp and ceremony, to congratulate them. And at the final, as the rain came down, she stood hugging her own players, and the winning French players, in a heartwarming act of humanity. Vladimir Putin deserves credit for Russia delivering a wonderful World Cup, but on that stage he was significantly overshadowed as he stood under an umbrella, while Macron and Grabar-Kitarovic shared the rain with their compatriots.

In a sporting event so dominated by men that no other woman came anywhere close to centre-stage, it was also quite dramatic that Grabar-Kitarovic was effectively the only woman celebrated over the course of the five weeks of the World Cup. It was a gilt-edged opportunity to stand out, and Ms. Grabar-Kitarovic rose to the occasion in all her red-and-white-chequered glory.

(4) Smart Branding

As a fan, it has always bothered me that FIFA so tightly control the branding and use of the World Cup, in connection with their corporate sponsors. This time, being there, I felt the benefit of the strong branding. The branding for the 2018 event has been in force since just after the 2014 tournament. The Dusha font was specifically developed to work across English and Cyrillic scripts, with Asian overtones, and has been used consistently everywhere. You may have seen it in all the FIFA World Cup images and events. And the control of the branding means that the colours, fonts, and logos are instantly recognisable as official. Whether painted on the floor of metro stations or shown on television, the strong branding has played a role in the instant recognition of World Cup events, locations, merchandise, and signage. I have to say that it also significantly helped in finding signage and directions around the World Cup cities.

(5) Harnessing the Collective

The World Cup on the pitch was clearly a victory for the collective. The GOAT (Greatest of All Time) aspirants all went home early – Messi, Ronaldo, Neymar et al were back home before the semifinals. Croatia was the very embodiment of the collective over the individual. France too were a team where individuals were all able to put the team above themselves. Whether it was Mbappe, Pogba, or Griezmann, you could see they were all playing for the team, not for personal glory. While it was reassuring to note that teams still win over individuals, it begs the question of what great teams and managers do to instil the right culture. The France manager, Deschamps, was himself a World Cup-winning player, but he played in an unglamorous role, one derided by Eric Cantona as that of a ‘water carrier’. The Croatia manager was largely unknown before the World Cup. If anything, this eschewing of the star culture was a real reassurance for those who have always believed in the team over the individual.

This isn’t a call for removing individualism. Great performers will always shine – as Modric, Mbappe, and Pogba all did – but great teams are built around an overarching team ethic that both elevates and subsumes the stars. That is the mark of great management.

(6) Technology Wins

One of the biggest talking points of the World Cup was the use of VAR. FIFA made a bold move to implement VAR (Video Assistant Referee) at the World Cup, which allowed referees to use video replays where there was reasonable doubt about a decision. A bank of assistant referees continuously watched the replays and signalled to the on-field referee where there was a potential need for a review.

In the main, the technology was a success. A number of decisions were given or reversed based on the video evidence, and by and large the right result was achieved. You would have to say this was a big step forward for football. The one notable case where people had doubts was the call in the final to award France a penalty. Still, you can say that the referee looked at it closely, from a number of angles, and made his best call. The rules of football still leave room for subjectivity and judgment, so VAR allows the referee to base that judgment on more data than ever before.

If anything, FIFA have erred in setting the expectations for this technology by announcing that only clear and obvious cases will be reviewed. Why? The only driver for the technology should be that the right decisions are reached every time, as far as possible. This idea of ‘clear and obvious’ has led to a lot of quibbling between pundits on panels, whereas really all we should be concerned with is that the technology enabled the right decisions to be reached, and in nine cases out of ten it was an unmitigated success.

One area where more work needs to be done, and a point I’ve argued before, is the ability of VAR to engage the audience, especially at the ground. The TV audience gets to see the replays, as do the referees. There’s no reason why the audience at the ground shouldn’t see the same replays, in the way tennis and cricket audiences do. If anything, withholding them reflects the organisers’ view of whether spectators are mature enough to behave with decorum no matter what the referee’s final decision is.

Introducing technology is always fraught with risk, as most businesses know. The lesson here is to be clear about the value of technology and data in enabling better decisions, while also focusing on the customer’s experience.

On a side note, the other big technology winner at the World Cup was Google Translate – a point already highlighted in the Guardian. Google Translate was the saviour for the hundreds of thousands of people coming to Russia from across the world – a quarter of a million from South America alone, for example. Most of these people were communicating with shopkeepers, taxi drivers, and restaurant staff via Google Translate, and doing so quite effectively. Our taxi driver from the airport to our apartment in Moscow had Google Translate set up so he could talk into it in Russian while driving and it would repeat the words in English. He even managed to point out some sights and tell us a couple of jokes over the course of our half-hour journey.

There are probably many more if you look hard enough, but these are the ones that stand out for me. What are yours?

20 Takeaways From The CogX Event, London

First, a clarification. I visit events such as CogX to be stimulated, to have new thoughts and to have the neurons in my brain fired in new ways. I go to learn, not to network. So I agonise over the sessions to attend and, importantly, the sessions I miss – of which there is an overwhelming majority, as this event was running 5–7 conference tracks at any time, as well as a lot of other small stage events. By and large though it’s a weirdly monastic experience: surrounded by people, but very much alone in my head, to the point where I’m actually a little bit annoyed when somebody wants to talk to me! This, then, is the list of things that made me think.

  1. If there was one session that made attending the event worthwhile for me, it was Zavain Dar’s session on the New Radical Empiricism (NRE). His argument is that the traditional scientific method is based on certain rational assumptions — which are now being challenged. In the classic method, you would hypothesise that the earth was round, find the right experiments to run, collect data, and prove or disprove your hypothesis. This runs into trouble when the computational models are too complex and/or changing too often — such as gene sequencing or macroeconomic data. It is also not efficient when the range of options is vast and we don’t know what data might be relevant — e.g. curing cancer. The traditional methods may yield results, but it might take a lifetime of research and work to get there. What Dar calls the NRE is the opposite — a data-driven view which allows machine learning to build hypotheses based on patterns it finds in the data. So in the NRE world, rather than starting with whether the earth is round, you would share a lot of relevant astronomical data and ask the machine to discover the shape of the world. This approach works best in areas where we have a data explosion, such as genomics and computational biology, or where there is plenty of data but the field is shackled by traditional hypothesis-based methods, such as macroeconomics. An additional problem that NRE addresses is where the problem space is simply too complex for human minds to compute — both the examples above are instances of this complexity. You may know that Radical Empiricism is itself a construct from the late 19th century by William James — one which eschews intuition and insists on physical experiences and concrete evidence to support cause and effect. It’s worth noting that there are plenty of environments where quantifiable data is not yet abundant and experts still follow the traditional method driven by hypotheses. VC investing, ironically, is such an area!
  2. This also led to a discussion on deeptech led by Azeem Azhar of Exponential View, with panelists from Lux, Kindred Capital and Episode1 Ventures. Deeptech is defined from an investment perspective as companies and start-ups building products which involve technical risk, rather than using existing tech to solve new problems — usually products and ideas which, a few years ago, would have had to subsist on research grants and be housed in academic institutions.
  3. Jurgen Schmidhuber’s session on LSTM was another highlight. Schmidhuber’s 1997 work on LSTM (Long Short-Term Memory) was a foundation for much of the subsequent advancement in AI and is used in a number of technology products. Schmidhuber presented an excellent timeline of the evolution of AI over the past 20 years and ended with a long view, exploring the role of AI and ML in helping us reach resources that are not on earth but scattered across the solar system, the galaxy and beyond — and how we might perceive today’s technology and advancement in a few thousand years.
  4. One of Schmidhuber’s other points was about curiosity-driven learning — mimicking the way an infant learns by exploring his or her universe. This is the idea that a machine can learn about its environment through observation and curiosity.
  5. Joshua Gans, the author of Prediction Machines and professor of economics and tech innovation, talked about AI doing to prediction what computers did to arithmetic. Computers dramatically reduced the cost of complex arithmetical operations; AI does the same for prediction or inference, which is essentially making deductions about the unknown based on the known. Bringing down the cost of prediction has a massive impact on decision-making, because that’s what we’re doing 80% of the time at work as managers.
  6. Moya Greene, the CEO of Royal Mail, talked about the transformation that Royal Mail went through — including an increase in technology team size from 60 to over 550 people. She also made the comment that most managers still under-appreciate the value of tech, and overestimate their organisation’s capability to change and absorb new tech.
  7. Deep Nishar of Softbank used an excellent illustrative example of how AI is being used by digital streaming and media providers to create personalised cover art for albums, based on users’ choices and preferences.
  8. Jim Mellon, long-time investor and current proselytiser of life-extending tech, suggested that genomics would be a bigger breakthrough than semiconductors. He was joined by the chief data officer of Zymergen, which works on bio-manufactured products, based on platforms which work with microbial and genetic information.
  9. A very good data ethics panel pondered the appropriate metaphors for data. We’ve all heard the phrase ‘data is the new oil’, yet that may be an inadequate descriptor. Experts on the panel posited metaphors such as ‘hazardous material’, ‘environment’, and ‘social good’, because each of these framings is useful in understanding how we should treat data. Traditional property-based definitions are limited, and it was mentioned that US history has plenty of examples of trying to correct social injustice via the property route (reservations for Native Americans) which have not worked out — hence the need for these alternative metaphors. For example, the after-effects of data use are often misunderstood, and sometimes data needs to be quarantined or even destroyed, like hazardous material, according to Ravi Naik of ITN Solicitors.
  10. Michael Veale of UCL suggested that the ancient Greeks used to make engineers sleep under the bridges they built. This principle of responsibility for data products needs to be adopted for some of the complex products being built today by data engineers. Data use is very hard to control today, so rather than trying to control its capture and exploitation, the focus perhaps should be on accountability and responsibility.
  11. Stephanie Hare made the excellent point that biometric data can’t be reset. You can reset your password or change your email, phone number, or even get a completely new ID, but you can’t get new biometrics (yet). This apparent permanence of biometrics should give us pause to think even harder about how we collect and use it for identification — for example in the Aadhaar cards in India.
  12. Because of the inherently global flows of data and the internet, the environmental model is a good metaphor as well. Data is a shared resource, and the lines of ownership are not always clear. Who owns the data generated by you driving a hired car on a work trip? You? Your employer? The car company? The transport system? Clearly a more collective approach is needed, and much as with social goods such as the environment, these models need to validate the shared ownership of data and its joint stewardship by all players in the ecosystem.
  13. Stephanie Hare, who is a historian of France by education, provided a chilling example of how the gap between the original use and the ultimate use of data can have disastrous consequences. France had a very sophisticated census system and, for reasons to do with its Muslim immigrants from North Africa, captured the religion of census respondents. Yet this information was used to round up the Jewish population and hand them over to the Nazis, because that’s what the regime at the time felt justified in doing.
  14. On a much more current and hopeful note, I saw some great presentations by companies like Mapillary, SenSat, and Teralytics, which focus on mapping cities with new cognitive tools — especially cities which are of less interest to the tech giants — using crowdsourced information and data, which may include mobile phone and wifi usage, or street-level photographs, all used with permission.
  15. At a broader level, the smart cities discussions, strongly represented by London (Theo Blackwell) and TfL (Lauren Sager Weinstein), showed that the transition from connected to smart is an important one. There were very good examples from TfL of using permission-based wifi tracking on platforms to give the line managers for each of the tube lines much more sophisticated data on the movement of people, so they can make decisions about trains, schedules and crowd management, over and above the traditional methods of CCTV footage or human observation on platforms.
  16. At a policy level, a point made by Rajiv Misra, CEO of Softbank Investment Advisors (aka the Vision Fund), is that while Europe leads in a lot of the academic and scientific work being done in AI, it lags behind China and the US in the commercial value derived from AI. The point was echoed by the House of Lords report on AI, which talks about the investments and commitment needed to sustain the lead the UK currently enjoys. Schmidhuber’s very specific solution was to mimic the Chinese model — i.e. identify a city and create an investment fund of $2bn to put into AI.
  17. I also sat through a few sessions on chatbots, and my takeaway is that chatbots are still largely in the world of hype. There is very little ‘intelligence’ that they currently deliver. Most platforms rely on capturing all possible utterances and coding them into the responses, and even NLP is still at a very basic stage. This makes chatbots essentially a design innovation — instead of finding information yourself, you have a ‘single window’ through which to request all sorts of information. Perhaps it’s a good thing that the design challenges are getting fixed early, so that when intelligence does arrive, we won’t stumble around trying to design for it.
  18. Within the current bot landscape, one useful model I heard is to treat a bot like a new intern that doesn’t know much, and give it a matching personality, so that its responses are appropriate and expectations are set accordingly. It might just start with a ‘hello, I’m new, so bear with me if I don’t have all the answers’, for example.
  19. Dr Julia Shaw, who has built Spot — a chatbot to handle workplace harassment — provided a very interesting insight about the style of questions such a bot might ask. A police officer’s questions are all about capturing in detail exactly what happened and making sure the respondent is clear and lucid about events, incidents, and the detail. A therapist’s questions, on the other hand, are all about helping a victim get over some of the details and get on with their lives. This suggests that you need to be clear whether your bot is an extension of law enforcement or of a counselling body. It also suggests that you might want to do the former before the latter.
  20. A really important question that will not leave us is: what do we do if the data is biased? If we are conscious of certain biases to do with gender, race or age, then we can guard against them either at the data level or at the algorithmic level, but we also need to be able to detect biases in the first place. For example, there is the oft-cited finding of how the leniency of sentences handed out by judges in juvenile courts in the US varies inversely with the time since the judge’s last meal. A minimal sketch of what a simple bias check might look like follows below.
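
To make the bias-detection point concrete, here is a minimal sketch of one very simple check: a disparate impact ratio computed over a toy dataset. The records, group names, and the 0.8 rule of thumb are illustrative assumptions of mine, not anything presented at the event; real audits use far richer metrics and statistical tests.

```python
from collections import defaultdict

# Toy records: (protected attribute value, favourable outcome 0/1)
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, favourable = defaultdict(int), defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    favourable[group] += outcome

# Favourable-outcome rate per group
rates = {g: favourable[g] / totals[g] for g in totals}
print("Rates by group:", rates)

# Disparate impact ratio: lowest group rate divided by the highest.
# A common rule of thumb (from US hiring guidance) flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")
```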

Clearly all of this really represents under 20% of the great discussions over the 2 days. Please do add your own comments, takeaways and thoughts.

Hail Mary (Meeker)!


On the 29th of May, Mary Meeker released her annual compendium of the digital state of the world – the KPCB Internet Trends report. For those who may not remember, Mary Meeker is a veteran who survived the dot-com crash and the financial crisis of 2008 as the head of tech research for Morgan Stanley, and has been named among the 10 smartest people in tech. She now serves as a partner at KPCB (Kleiner Perkins Caufield & Byers) and has been publishing her annual opus for a few years now.

The problem is that when you’re Mary Meeker, you can get away with putting out a deck of 294 slides. For us mere mortals, reading and absorbing this encyclopaedia of information is a challenge in itself. Every year I carefully save the deck to read in detail and, of course, it never happens. So this year, with the benefit of a relatively free weekend, I thought I would do a first pass and pull out some of the most interesting things I found in the report. Here are my top 10 takeaways from the Mary Meeker report – some of them confirm what we know, while others are things we didn’t know, or are truly counter-intuitive.

What I knew or suspected.

1. The devices story: mobile device shipment growth has shrunk to zero. This confirms what we’ve known for a while – device evolution has stalled since Steve Jobs, and Samsung, the largest manufacturer, has a ‘follow Apple’ strategy. Will we see a new device redefine growth, or will we see a decline in shipment numbers next year? HMD – are you watching? (Slide 6)

2. The decline in desktop use despite overall growth. While mobile internet growth is expected, it’s the ‘other devices’ category that is interesting. This presumably includes netbooks and the like, but also smart things. I expect that in future this category will be broken out to reflect the detail of the Internet of Things. (Slide 11)

3. The privacy paradox will be one to watch – after all, data is how every single provider improves its services while keeping prices low, which leads to users spending more time and sharing more data; set against this is the regulators’ need to protect consumers and govern data use. This will be a key axis of debate going forward and will determine the balance between innovation and protection. Unfortunately, Meeker’s slides don’t carry much insight on this by way of data. (Slides 31-36)

What I didn’t know (I’m intentionally using the singular, as you may well be aware of this)

1. While we’re aware that big tech now dominates the market cap list, what should worry the rest of the pack is how they dominate the R&D spending list, which points to a continuation of their dominance at the top. The top 15 R&D investors list is dominated by 6 technology firms, with 2 each from automotive, petroleum, telecoms, and pharma, and GE as the only conglomerate. The top 5 in the list are Amazon, Alphabet, Intel, Apple, and Microsoft. Also, tech firms report the highest growth in R&D, with 9% CAGR and 18% YoY growth. (Slides 40-41)

2. We know that image recognition is an area where AI has now passed human levels of accuracy, leading to all kinds of applications, from scan analysis in healthcare to more controversial applications such as face recognition. Now, voice-based natural language recognition is another such area, as demonstrated recently by Google. This should drive a revolution in customer contact centres and in human-computer interfaces in general. (Slide 25)

3. The extent to which Amazon and Google are coming to dominate the enterprise AI race. To be honest, we know instinctively that the AI race will be won by the players with the largest data stockpiles, but the range of services being offered to enterprise customers is still an eye-opener. We’ve just started playing around with Google’s Dialogflow, but Google also offers Tensor cloud-based hardware, the recently announced AutoML (machine learning), and the Vision API (image recognition), while Amazon has AWS-based tools such as Rekognition (image recognition), Comprehend (NLP), SageMaker (ML framework), and of course their AWS GPU clusters. (Slides 198-200)

4. The growth of Fortnite and Twitch on the gaming front pushes forward what we saw with Pokemon Go. The sweet spot between hardcore platform-based gamers and casual gamers and kids – where millions of people get just a little bit more involved in a game that does not need a special platform – is the story behind Fortnite. (Slide 24)

What I didn’t expect

1. The highest increase in enterprise IT spending is in networking equipment. This is a surprise. I haven’t found the data on this yet, and while the 2nd and 3rd place results don’t surprise me (AI and hyper-converged infrastructure), my curiosity is definitely piqued by why companies are spending more on networking equipment – connecting to cloud environments from the enterprise, perhaps? More connected devices and environments?

2. I’m seeing a lot more confirmation of the models of lifelong learning. This is repeated by Meeker, but her really interesting insight is how much more freelancers invest in learning compared to their presumably complacent employee counterparts. Perhaps unsurprisingly, the top courses sought include AI and related subjects, cryptocurrency, maths and English. (Slides 236 and 233)

3. Meeker makes a great point about Slack and Dropbox – I wouldn’t have picked these two companies as the flagbearers of consumer-grade technology in the enterprise, but clearly they are among the most deeply penetrated consumer-style tools in the corporate environment. (Slides 264-268)

Meeker has a big section on the job market, on-demand jobs and future jobs. She also makes the same point others have made about how all technologies so far have created net new jobs. While I broadly agree, history is not always the best predictor of the future. And the fact that there will be net new jobs tends to gloss over the significant short-term and geographical disruption in livelihoods that is likely to occur. Think Detroit or Sheffield. There may be more automotive and steel manufacturing jobs today than in 1980, but they are in China, not in Detroit or Sheffield, and so of little solace to the unemployed factory worker and his or her family in those towns. This may well be the story of AI – but potentially at a larger scale and possibly in a shorter time frame. (See slides 147-163.)

There are also useful slides on the gig economy and on-demand jobs now being a scaled phenomenon. (Slides 164-175)

There are also entire sections on China, Immigration and Advertising – which I’ve not delved into as they are currently of less interest to me personally. The E-commerce section also didn’t have anything that jumped out at me as noteworthy. Happy to be corrected!

What Is The Future of User Experience?


In the beginning was the green screen. Then a chap called Douglas Engelbart said ‘let there be point and click’, and there was the GUI, later built at Xerox PARC and then brought to the world by that Prometheus — Steve Jobs. Much later came Windows — the earliest memory most people have of a computer. And Windows grew fatter and hungrier, swallowing all the computing power Intel could throw at it, while Apple became the tool of the technology artisan. But it still felt like anything that could be imagined could be brought to life on a desktop computer.

Then came the Internet, and we were back in a world of limited resources: 256 colours, low bandwidth and limited information delivery. We tried to replicate the computer screen on the web and failed, largely. Then Jakob Nielsen showed us a path and web usability became a thing. Nielsen said things like ‘people don’t read, they scan’. And so sidebars and highlights were born. And we debated whether scrolling was better than clicking, and whether people would read things they had to scroll to find.

Around the year 2000, I co-authored articles with Karuna on information design. This was a great challenge of website user experience — how to layer information and organise it so that anything could be found within three clicks, as Nielsen suggested. That was easy if you were a small business but much harder if you were a conglomerate. We also ran workshops where a simple exercise highlighted this challenge effectively. We would send half the participants into the next room and ask them to list the 10 things they wanted to put on the home page of a hypothetical pizza restaurant they were going to open. And we would ask the other half in the room to think as customers and list the information they would like to see on the website home page. Invariably, the pizza restaurant guys would want to put in things like the quality of their pizza, the ingredients, or (yes, really!) their mission statement or brand tagline. And just as invariably the customer group would want to know the opening times, whether they delivered to your home, and what their phone number was.

This struggle between what we want to tell users and what users want to know has been at the heart of user experience challenges even today. Service design methods have since taught us to delve deeper into what users are trying to do, and we know now that we need to think of the end-to-end experience of the user, not just what’s on the screen. We’ve been through the social networking revolution, which has made us comfortable with the ‘stream of updates’ style of interface, where sharing is an expected option for any content. We’ve also had our lives changed by smartphones — one-handed computers which took us from point and click to press, pinch, zoom, and wave, where we learnt to make all information contextual and even more byte-sized. And we learnt that tablets are subtly different from phones — for example, people are more often than not sitting down when they use the iPad, and usually not multitasking, except with watching television.

Through this journey, a couple of interesting things have happened from my perspective. If you hark back to creative teams in design agencies, you would expect to see a writer and designer (a copywriter / art director) working in tandem. This became a triumvirate on the web, with a developer joining the party. But over the years, it seems three was too much of a crowd, and now most teams tend to have a designer and a developer, with the role of the writer greatly diminished. I think this is a loss, as successful design and communication still require a calibre of wordcrafting and messaging that can’t be ignored. But it does feel to me that writers have become a rare breed in digital design, unless you look at game creators or specialist communications teams.

Consequently, the pendulum has swung from traditional creative agencies to design studios and now to digital agencies, who bring technical smarts to the table along with design skills. But even here, writing is a lost art, perhaps reflecting the extreme evolution of Nielsen’s thinking, where people don’t really read copy at all and the initial three-second impression is where the battle is lost and won. David Ogilvy would weep.

The other interesting pendulum swing has been to the browser and back. The browser inured us to the spartan environments of the early world wide web even as it immunised us to everything around it. We didn’t need to care whether the operating system was a Mac or a PC, whether the machine was made by Dell or Compaq, or indeed whether we were running on a Linux or Windows OS. This spoilt us, because we learnt to focus completely within the browser window. I’m guessing you’ve never had a discussion in the web-design world about form factor or the size of the screen; the only debates were around browser versions and which features of HTML were supported. I once had a client representative tell me that they believed in ‘vaastu’ (which is like feng shui). I suggested to him we should put in a line saying ‘this website is best viewed facing east’. He didn’t think it was funny. Before Netscape and Mozilla changed the world, we would certainly have had to worry about the world beyond the browser; since the iPhone, we not only have to worry about the device, the OS, the manufacturer, and the screen size and shape, but we also need to think about the user’s context. Is she likely to be bending under an industrial machine trying to read a part label? Or is she likely to be pondering whether to pop into her favourite store as she walks down the high street? Context is everything now.

Today we are in the early stages of another new world, with three big changes. The first, and arguably the most compelling, is the move towards ‘natural language interfaces’. As many people have articulated, this is the first time we’re teaching technology to interact with people rather than teaching people to interact with technology. Through chatbots and voice-based interfaces, we are now able to interact naturally with our systems. While the technology evolution that has made this possible, via NLP, is truly fascinating, it poses entirely new and unsolved design challenges. How do you design a conversation? How can we help a user complete a complex activity through a natural interface? In what forms should we combine this new interface with the old? What are its limitations? You probably know by now that Alexa’s initial success was stymied by its inability to hold context, so every question is a restart of the conversation — a very ‘50 First Dates’ model of interaction. This is not natural; it’s like talking to a machine with severe amnesia. Google has pulled ahead in this regard and Alexa is now catching up. (Siri is far behind.) But whether you are interacting via voice or via web chat, the entirely new challenge is one of designing conversations — and personalities. Bots are boring if they’re just functional; they need to have quirks and humour, and be able to engage in small talk. A toy sketch of what ‘holding context’ means is included below.
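
To illustrate what holding context means in the simplest possible terms, here is a toy sketch of a bot that remembers an entity (a city) across turns. The keyword lists and canned responses are entirely my own illustrative assumptions; this is nothing like how Alexa or Google Assistant actually work.

```python
# Toy illustration of conversational context: the bot remembers slots
# (here, a city) across turns instead of restarting with every question.
# Purely illustrative; no real NLP or retrieval happens here.
def respond(utterance: str, context: dict) -> str:
    words = [w.strip("?!.,'") for w in utterance.lower().split()]
    for city in ("london", "moscow", "paris"):
        if city in words:
            context["city"] = city                       # remember the entity
    if "weather" in words:
        city = context.get("city")
        return (f"Here's the weather for {city.title()}."
                if city else "Which city do you mean?")
    if "restaurants" in words or "hotels" in words:
        topic = "restaurants" if "restaurants" in words else "hotels"
        city = context.get("city")
        return (f"Looking up {topic} in {city.title()}."  # context carried over
                if city else f"Where would you like {topic}?")
    return "Sorry, I didn't catch that."

context = {}
for turn in ["What's the weather in Moscow?", "And what about restaurants?"]:
    print(turn, "->", respond(turn, context))
# Without the shared `context` dict, the second question would be a restart,
# the '50 First Dates' model described above.
```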

There are any number of articles on how to design a chatbot — Chatbot Magazine publishes a few every week. Most of them are useful in some way, as they are written by practitioners. But to me the great joy in this is to witness the triumphant return of the writer to the design process — or indeed, to the heart of the design process. Language is important once again, words matter, dialogue drives engagement.

A second feature of this next wave is the embedding of intelligence into the design process. We are not just designing an experience, an interaction or even a conversation; we are in fact often designing intelligence. An effective chatbot is intelligent in at least three ways. The first is its ability to understand natural language — to tell the similarity between ‘I forgot’ and ‘I can’t remember’, or the harder distinction between ‘I have a little knowledge’ and ‘I have little knowledge’. The second is its ability to understand conceptual interlinkages for a given context — for example, to know that the word ‘drain’ applied in the context of a phone is connected to energy and charge, but in the context of a bathroom repair is connected to plumbing and pipes, which will shape the questions you ask the user to clarify the problem. And the third is its ability to find answers and solve the problem being posed — to help a customer looking for support with a mobile phone problem or a bathroom fittings problem (a store like John Lewis can have both) by connecting them to the right pieces of information or experts. These are non-trivial problems, and designing intelligence is a brand new activity. Often we are really designing pseudo-intelligence — making the bot appear smart — but over time it will actually become smart. And human beings’ ability to interact with bots and machines is evolving alongside this. We don’t yet have a Jakob Nielsen moment for this, but somebody at some point will create the heuristic models for designing intelligent interfaces. Many are trying already. A small sketch of the ‘drain’ example follows below.
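
To make the second kind of intelligence concrete, here is a hedged sketch of the ‘drain’ example: the same word maps to a different concept, and a different clarifying question, depending on the product context. The lookup table and questions are purely illustrative assumptions.

```python
# Sketch of context-dependent concept mapping: the word 'drain' means
# 'battery' in a phone-support context and 'plumbing' in a bathroom one,
# and that mapping shapes the clarifying question the bot asks next.
CONCEPTS = {
    "phone": {
        "drain": ("battery", "Does the battery drain even when the phone is idle?"),
    },
    "bathroom": {
        "drain": ("plumbing", "Is the water draining slowly, or not at all?"),
    },
}

def clarify(term: str, product_context: str) -> str:
    concept, question = CONCEPTS.get(product_context, {}).get(
        term, ("unknown", "Could you tell me a bit more about the problem?"))
    return f"[{concept}] {question}"

print(clarify("drain", "phone"))     # [battery] Does the battery drain even when the phone is idle?
print(clarify("drain", "bathroom"))  # [plumbing] Is the water draining slowly, or not at all?
```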

The third big change in user experience is that screens are vanishing — whether it’s an Alexa-like voice interface, or an IoT solution with a smart object such as the Nest thermostat. It’s probably a bit sensationalist to say screens are vanishing; it is more accurate to say that the interaction with a service is now spread across devices, with the screen being only one part of it — but at least I have your attention now. If you install a Ring doorbell, you may still access it via a screen, and you might control a Philips Hue light through your screen, but you might also control it through Alexa, and the primary experience of the light does not have to do with the screen. Industrial design is not new; neither is interface design. But we’re now seeking to marry the two and create smart products with which we can engage without a screen. A variation of this is in areas such as augmented reality, where we are blending a screen experience with a real-world one. These are still by and large clunky, and mostly used as gimmicks rather than adding true value.

These are the immediately visible changes based on what we can see coming. What will happen when video manipulation becomes easy on a user’s desktop or phone, or when AI tools become commonplace? Can AI design interfaces? There was the famous example of Google testing 41 shades of blue to determine which was getting the best response; that kind of process can already be automated. But as technology opens up new frontiers — embedded chips in our bodies, brain-computer interfaces and quantum computing are just some of the examples we can look forward to today — entirely new worlds of user experience will open up. To quote Jack Ma, who used a mild variation of an earlier sentence: after a hundred years of training humans to behave like machines, we are now training machines to behave like humans.

What will change and what will stay the same? Dieter Rams’ principles will probably remain applicable in their essence, as will the principles of service design. Steve Jobs’ maxim — design isn’t how it looks, but what it does — will almost certainly hold going forward. But the specific challenges will morph, the number of parameters we need to consider while designing an experience will vary and largely expand, the complexity of challenges will grow, technological advancements will continue to both solve problems and create new ones, and demographic, environmental and social changes will bring us entirely new problems to solve. The future of user experience may well lie in green and nourishing products and services that help us age gracefully and stay connected with our friends, family and things.

It’s fair to say, though, that there are more questions than answers. What is the speed and direction of travel? How do we rebalance skills? How, in practical terms, do we build and deliver experiences working with technology? What does designing intelligence look like? Will writers ever be the Jedi knights of design? What do we think great user experience will look like in five years’ time? These are just a few of the as yet unanswered questions in the world of user experience, and this is why it is still worth debating.

Seven in 7: Amazon’s Infinite Monkey Theorem Defence, GDPR Impact on Innovation, Ocado’s Successful Transformation, and More…

Seven in 7: Alexa sends the wrong message; does GDPR take us backwards? Uber crash – design flaw; future gazing with Michio Kaku; AI winners; Ocado transformation and energy industry updates.

(1) Amazon Echo: message in a bottle

The technology story of the week is undoubtedly the one about Amazon Echo and the message it inadvertently sent. ICYMI, a couple in Oregon had a call from an acquaintance to say that Alexa had sent them a recording of a private conversation of the couple, without their permission or even knowledge. Amazon’s explanation is that this was a rare combination of scenarios in which a normal conversation between the couple somehow triggered all the keywords and responses that made Alexa record, validate and send the conversation to the acquaintance. This feels like the equivalent of the monkeys, typewriters and Shakespeare problem – only without the infinite amount of time.

Here’s Amazon’s explanation: https://www.recode.net/2018/5/24/17391480/amazon-alexa-woman-secret-recording-echo-explanation

(2) GDPR – impact on marketing and innovation.

I’m sure you’ve all received hundreds of emails in the past week exhorting you to stay in touch and re-sign up for all the emails you’ve been getting from people you didn’t know were sending you emails. But now that the moment has come, how will marketing work in a GDPR world? In one way this will take marketing backwards – there is now a ban on algorithmic decision-making based on behavioural data. It’s a moot point whether advertising falls into this category, but companies may want to play it safe, and in any case the confusion will create a speed breaker in the short term. We may now be back in the world where, if you’re watching or reading about Champions League football, you will see a beer ad irrespective of who you are. And it’s not just marketing – a lot of innovation will also come under fire, both because of safety-first practices and because some organisations will use GDPR as a shield for innovation-stifling practices, as highlighted by John Battelle of NewCo Shift. He argues that the regulation favours ‘at scale first parties’ – large tech platforms that provide you with a direct service, such as Netflix, Facebook, or Uber – where users are likely to give consent for data use more readily than to smaller, upcoming or relatively new and unproven services.

Dipayan Ghosh in the HBR – GDPR & advertising: https://hbr.org/2018/05/how-gdpr-will-transform-digital-marketing

John Battelle on GDPR & Innovation: https://shift.newco.co/how-gdpr-kills-the-innovation-economy-844570b70a7a

(3) Driverless / Uber / Analysis

The analysis of Uber’s recent driverless crash has now thrown some light on what went wrong, and the answers aren’t great for Uber. In a nutshell, the problem is design, not malfunction. All the components did exactly what they were designed to do; nothing performed differently and no component failed to do its job. But as a collective, the design itself was flawed. The car had 6 seconds and 378 feet of distance to do something about the pedestrian crossing the street with her cycle, but it was confused about what the object was. The human in the car only engaged the steering 1 second before the crash and started braking 1 second after the collision. The car was not designed to warn the human driver about possible threats. And a lot of the in-built safety systems in the Volvo vehicle, including driver alertness tracking, emergency braking and collision avoidance, were disabled in autonomous mode. In short, the responsibility lies with Uber’s design of its autonomous cars. Uber has stopped testing in Arizona but has now started exploring flying taxis. Not a project that might fill you with confidence!

Uber crash analysis: https://sf.curbed.com/2018/5/25/17395196/uber-report-preliminary-arizona-crash-fatal

(4) A glimpse of the future: Michio Kaku & Jack Ma

The robotics industry will overtake the automobile industry. Your car will become a robot – you will argue with it. Then your brain will also be robotised, and a ‘brain net’ will allow emotions and feelings to be uploaded. You will be able to mentally communicate with things around you. Biotech allows us to create skin, bone, cartilage and organs; alcoholics may be able to replace their livers with artificial ones. You may be able to scan store goods with a contact lens and see the profit margin on them. The first 7 minutes of this video tell you all of this through the eyes and experience of futurist Michio Kaku. Jack Ma (14 minutes in) also talks about trusting the next generation, and how we are transitioning from the industrial era, where we made people behave like machines, to a world where we are making machines behave like people. Believe the future before you see it, to be a leader, according to Ma.

https://www.youtube.com/watch?v=K1EZWYqm-5E&feature=youtu.be

(5) Who’s Winning The AI Game?

With the whole world hurtling towards an AI future, this piece looks at who exactly wins the AI game – across 7 different layers. It won’t surprise you to know that China is making amazing gains as a nation – their face recognition can pick out a wanted man in a crowd of 50,000. But it might surprise you that Nvidia’s stock is up 1500% in the past 2 years on the back of the success of their GPU chips. Meanwhile, Google is giving away TensorFlow for free. All this points to a $3.9 tn market for enterprise AI in 2022. Are you ready for the challenge?

Who wins AI, across 7 layers
https://towardsdatascience.com/who-is-going-to-make-money-in-ai-part-i-77a2f30b8cef

(6) Ocado – digitally transformed.

When Ocado launched in 2000 as an online grocery arm of Waitrose, it followed on the heels of Webvan and a category of providers set up to focus on e-commerce fulfilment. Cut to 2018, and Ocado is a story of successful digital transformation: it is today a provider of robotic technology for warehouse automation. Having become profitable in 2014, it now has a valuation of $5.3 bn and is set to become a part of the FTSE 100.

https://www.ajbell.co.uk/news/index-reshuffle-looks-set-deliver-ocado-ftse-100
https://www.theguardian.com/business/2018/may/17/us-deal-boosts-ocados-stock-market-value-above-5bn

(7) Understanding statistics: What medical research reports miss

When a drug is tested and the outcome suggests a 5% chance of a possible side effect, this does not necessarily mean that you have a 5% chance of being affected if the drug is administered to you. It may instead mean there is a 5% chance that you have a condition which gives you a 100% likelihood of being affected. This is a subtle but very important distinction in how we interpret the data. Continuing down this line of thought, it points to the lack of personalisation in medicine, not just the misinterpretation of data. A small worked example follows after the link.
https://medium.com/@BlakeGossard/the-underrepresentation-of-you-in-medical-research-85289b591ba
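
A tiny worked sketch of that distinction, with made-up numbers (a hypothetical trial of 10,000 patients and a 5% rate), purely to show that the two readings produce the same headline statistic but very different personal risks:

```python
# Worked sketch of the distinction described above, with made-up numbers.
population = 10_000
rate = 0.05

# Reading 1 (naive): every patient carries a 5% personal risk.
# Reading 2: 5% of patients carry a condition that makes the side effect
# near-certain for them; everyone else is essentially unaffected.
susceptible = int(rate * population)

expected_naive = rate * population          # 500 affected patients expected
expected_conditional = susceptible * 1.0    # also 500 affected patients
print(expected_naive, expected_conditional)

# The headline trial statistic is identical in both readings, but the
# personal risk is completely different: 5% for everyone in the first,
# versus ~100% for the susceptible 5% and ~0% for the rest in the second.
# Aggregate results alone can't tell you which group you belong to.
```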

 

Regulating Digital Businesses – Like Chasing Trains


I don’t know if you’ve ever had the experience of running for a train that’s just started to move. I’ve had to do it a few times – yes, I was younger and more foolish then – but it was usually within seconds of the train moving that I was on it. It’s only in old movies that you see the protagonists dashing down the platform as the train picks up speed. Usually you just have the platform length, and the problem is that the train is accelerating: there is a finite window of opportunity, after which you’re just going to be left on the platform. This is my very long-winded analogy for regulators and technology. As technology accelerates, it’s getting harder for regulators to keep pace, and in many areas they are just like the proverbial train chasers, running desperately after an accelerating train – often in a futile bid to control a business or industry that is on the verge of leaving the station of regulatory comfort. You can pick from a range of visual metaphors – a man trying to control seven unruly horses, or grabbing a tiger by the tail – but you get the idea. Regulators are in a fix.

The sight (and sound) of the congressional hearing of Mark Zuckerberg did not bode well for regulators. They should have had Zuckerberg dead to rights over Facebook’s (willing or unwilling) culpability in the Cambridge Analytica imbroglio, yet he came out with barely a scar to show for two days of grilling. Many of the people asking him questions came across as the stereotypical grandparent trying to figure out the internet from their grandchild, even if that is an exaggerated caricature. There is arguably a 40-year age gap between the average lawmaker and the average entrepreneur. But the age challenge is just a minor problem. Here are some bigger ones.

Technology businesses are shape-shifting enterprises that invariably redefine industries. Platforms cannot be regulated like their industrial counterparts. Uber is not a taxi company. Facebook is not a media business. Airbnb is not a hotel. No matter how convenient it might be to classify and govern them that way, or how often someone points out that the world’s biggest taxi company doesn’t own taxis, these are data and services platforms, and they need an entirely new definition. You could argue that the trouble with Facebook has come about because it was being treated like a media organisation rather than a data platform. And let’s not forget that the only reason Facebook was in the dock is the success of Cambridge Analytica in actually influencing an election – not the routine misuse of customer data, which may have gone on for months and years, by Cambridge Analytica as well as other similar firms. Similarly, while governments’ focus on Uber stems largely from incumbent and licensed taxi services, nobody seems worried that Uber knows the names, credit card details, and home and office addresses of a majority of its users.

Tech businesses, even startups, are globally amorphous from a very early age. Even a 20-person startup barely out of its garage can be founded in California, have its key customers in Britain, its servers in Russia, its developers in Estonia, and pay taxes in Ireland. Laws and governments are intrinsically country-bound and struggle to keep up with this spread of jurisdictions. Just think of the number of torrent services that have survived by being beyond the reach of regulation.

These are known problems and have existed for a while. Here’s the next challenge, which is more fundamental and even existential for lawmakers. With the emergence of machine learning and AI, the speed of technology change is increasing. Metaphorically speaking, the train is about to leave the station. If regulators struggle with the speed and agility of technology companies today, imagine their challenge in dealing with the fast-evolving and non-deterministic outcomes engendered by AI! And as technology accelerates, so do business models, and this impacts people, taxes, assets, and infrastructure. Imagine that a gig-economy firm that delivers food to homes builds an AI engine to route its drivers, and it finds a routing mechanism that is faster but established as being riskier for the driver. Is there a framework under which this company would make that decision? How transparent would it need to be about the guidance it provides to its algorithms?

I read somewhere this wonderful and pithy expression of the challenge of regulation: a law is made only when it’s being broken. You make a law to officially outlaw a specific act or behaviour; therefore the law can only follow the behaviour. Moreover, for most countries with a democratic process, a new law involves initial discussion with the public and with experts, crafting of terms, due debate across a number of forums and ultimately a voting process. This means we’re talking in months, not days or weeks. If technology is to be effectively regulated and governed, a key challenge to address is the speed of law-making. Is it possible to create an ‘agile’ regulatory process? How much of the delay in regulation is because the key people are also involved in hundreds of other discussions? Would lawmaking work better if a small group of people were tasked to focus on just one area and empowered to move the process faster in an ‘agile’ manner? We are not talking about bypassing democratic processes, just moving through the steps as quickly as possible. A number of options are outlined in this piece on the Nesta website – including anticipatory regulation (in direct contravention of the starting point of this paragraph), or iterative rather than definitive regulation. All of these have unintended consequences, so we need to tread cautiously. But as with most businesses, continuing as at present is not an option.

Then there’s the data challenge. The big technology platforms have endless access to data, which allows them to analyse it and make smarter decisions. Why isn’t the same true of regulators and governments? What would true data-driven regulation look like? We currently have a commitment to evidence-driven policymaking in the UK (which has sometimes been unkindly called policy-driven evidence-making!), but it involves a manual hunt for supporting or contradicting data, which is again time-consuming. What if a government could analyse data at the speed of Facebook, and then present it to the experts, the public, and legislators in a transparent manner? The airline industry shares the data about every incident, accident and near miss across its ecosystem, competitors, and regulators, and this is a significant contributor to overall airline safety. (This is outlined in the book Black Box Thinking, by Matthew Syed.) Why isn’t the same true for cybersecurity? Why isn’t there a common repository of all significant cyber attacks, which can be accessed by regulators armed with data science tools and skills, so that they can spot trends, model the impact of initiatives and move faster to counter cyber attacks? If attacks seem to originate from a specific territory or exploit a specific vulnerability of a product, pressure can be brought to bear on the relevant authorities to address those.

These are non-trivial challenges and we need to be aware of risks and unintended consequences. But there is no doubt that the time has come for us to think of regulation that can keep pace with the accelerating pace of change, or governments and regulators will start to feel like the protagonists of movies where people run after trains.

Seven in 7 – Agile @ Scale, Maturing AI, The Ring of Success, Defending Democracy and More…

As we head into the TCS UK Innovation Forum this week, I’m preparing myself to discuss big ideas and disruptive changes. With that in mind, this week’s Seven in 7 looks at scaling AI, a startup that was bought for a billion dollars, and hacking democracy. But also, as we’re committed to becoming agile as an organisation, where better to start than with the great article in the new HBR about how to drive agile at scale!

(1) Doing Agile at Scale

This is a very timely look at Agile adoption at scale in the enterprise. It starts with enshrining Agile values in leadership roles, which requires a continuous approach to strategy. The next key thing is a clear taxonomy of initiatives, which may be classified into three categories: customer experiences, business processes, and IT systems. The next step is sequencing the initiatives, with a clear understanding of timelines – it can take 5-7 years for real business impact, but there should be immediate customer value. Enterprise systems such as SAP can be delivered using agile as well, but it needs the organisation to create and move with a common rhythm. There are businesses working in agile that use hundreds of teams, solve large problems, and build sophisticated products. This can be made easier with modular product and operating architectures – which essentially means plug-and-play capabilities for individual components. It’s important to have shared priorities and financial empowerment of teams. Talent acquisition and management need to be reshaped to meet the new needs. And funding of projects and initiatives needs to be seen as ‘options for further discovery’. After all, at the heart of agile is the ability to proceed with a clear vision but without necessarily knowing all the steps to get there.

(2) Artificial Intelligence At Scale – for non-technology firms

It’s clear that tech firms from Google to Amazon and Twitter have all been able to deploy AI at scale – enabling recommendations, analysis and predictive behaviour. For non-tech firms too, the time may have come for delivering scaled AI. One of the key areas where AI seems ready to scale is computer vision (image and video analysis) – relevant to insurance, security, and agribusinesses. The article below from the Economist also quotes TCS’s Gautam Shroff, who runs the NADIA chatbot project. A critical assertion the article makes is that implementing AI is not the same as installing a Microsoft program. This might be obvious, but what is less so is that AI programs by design get better with age, and may be quite rudimentary at launch. Businesses looking to implement AI may need to plan across multiple time horizons. And while the short-term opportunity and temptation is to focus on costs, the role of AI in creating new value is clearly much bigger.


(3) The Ring of Success:

What makes a new product successful? I met Jamie, the founder of Ring, a couple of years ago in London and was struck by his directness and commitment. He even appears in his company’s ads. Ring.com was recently acquired by Amazon for $1bn. Here, one of the backers of Ring talks about the factors which made Ring a success. In a nutshell, the list includes: (1) the qualities of the founder, (2) execution focus and excellence, (3) continuous improvement, (4) having a single purpose, (5) pricing and customer value, (6) integration of hardware and software, and (7) clarity about the role of the brand.


(4) Blockchain and ICO redux:

Do you know your Ethereum from your EOS, or your IOTA from your Monero? This piece from the MIT Tech Review will sort you out. And for those of you who are still struggling to understand what exactly blockchain is, here’s a good primer. Of course, you could always go and look at my earlier blog post on everything blockchain.



(5) X and Z – The Millennial Sandwich

X & Z, or the millennial sandwich. All the talk in the digital world revolves around millennials, but there is a generation on either side. Generation X followed the baby boomers, and it turns out they have a better handle on traditional leadership values than millennials. This article talks about Generation X at work.

On the other side, there’s a generation after the millennials – Generation Z. They’re the ones who don’t have TVs, don’t do Facebook, and live their lives on mobile phones. This article talks about how financial services are being shaped by Gen Z.


(6) Big tech validates Industry 4.0

This week, the large tech players disclosed significant earnings, beating expectations and seeing share prices surge. In a way it’s a validation of the Industry 4.0 model – the abundance of capital, data, and infrastructure will enable businesses to create exponential value, despite the challenges of regulation, data stewardship issues and other problems. Amazon still has headroom because, when push comes to shove, Amazon Prime, which includes all-you-can-consume music and movies, can probably increase prices still more.


(7) Defending Democracy

The US elections meet the technology arms race – this article presents experiences from a hacking bootcamp run for the teams who manage elections. While the details are interesting, there is a larger story here: more than influencing the elections either way, the greater harm this kind of election hacking wreaks is in its ability to shake people’s faith in democracy. As always, there’s no other answer than being prepared, but that’s easier said than done!