So You Think The Brain is Better Than The Computer?


Every discussion on the power of computers is framed by a comparison to the human brain, and by how the brain's fantastical capacity dwarfs that of any known computer. Ray Kurzweil estimates the brain's capability at roughly 10^16 calculations per second (cps), or 10 quadrillion cps. And it runs on about 20 watts of power. By comparison (according to this excellent article that everybody should read), the world's fastest supercomputer today can do 34 quadrillion cps, but it occupies 720 square metres of space, cost $390m to build and requires 24 megawatts of power.
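
A quick back-of-the-envelope comparison of energy efficiency, using only the figures above (Kurzweil's estimate for the brain and the article's figures for the supercomputer; all numbers are approximations):

```python
# Rough energy-efficiency comparison using the estimates quoted above.
brain_cps = 1e16          # ~10 quadrillion calculations per second (Kurzweil's estimate)
brain_watts = 20          # approximate power draw of the human brain

computer_cps = 3.4e16     # ~34 quadrillion cps for the supercomputer cited above
computer_watts = 24e6     # 24 megawatts

brain_eff = brain_cps / brain_watts          # ~5e14 cps per watt
computer_eff = computer_cps / computer_watts # ~1.4e9 cps per watt

print(f"Brain:    {brain_eff:.1e} cps per watt")
print(f"Computer: {computer_eff:.1e} cps per watt")
print(f"The brain is roughly {brain_eff / computer_eff:,.0f}x more efficient per watt")
```

On these numbers the brain is still some five orders of magnitude more energy-efficient, which is the gap the raw hardware comparison hides.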

Besides, that's just the 'hardware', so to speak. The brain's sophistication is far, far ahead of the computer's, considering all the miraculous things it can do. We know now that the biggest evolution of the human brain was the growth of the prefrontal cortex, which required a rethink of the interior design of the skull. A key facet of the brain is also that it is a neural network – capable of massively parallel processing – simultaneously collecting and processing huge amounts of disparate data. I'm tapping away on a laptop, savouring the smell and taste of coffee while listening to music on a cold, cloudy day in a warm cafe surrounded by art. The brain is simultaneously assimilating the olfactory, visual, aural, haptic and environmental signals, without missing a beat.

It would appear, therefore, that we are decades away from computers that can replace brain functions, and therefore jobs. Let's look at this a little more closely, though.

The same article by Tim Urban shows in great detail how the exponential trajectory of computers and software will probably lead to affordable computers with the capacity of a human brain arriving by 2025, and, more scarily, achieving the computing capacity of all humans put together by 2040. This is made possible by any number of individual developments and the collective effort of the computer science and software industry. Kevin Kelly points to three key accelerators, apart from the well-known Moore's law: the evolution of graphics chips capable of parallel processing, which enables the low-cost creation of neural networks; the growth of big data, which allows these ever more capable computers to be trained; and the development of deep learning, the layered, algorithmically driven learning process that brings much efficiency to how machines learn.

So the hubris around the human brain may survive another decade at best, and thereafter the question might not be whether computers can be as good as humans, but how much better than the human brain the computer could be. But that has been well argued and no doubt will be again, including the moral, ethical and societal challenges it will bring.

I actually want to look at the present and sound a note of warning to all those still in the camp of ‘human brain hubris’. Let me start with another compliment to the brain. Consider this discussion between two friends meeting after ages.

A: How have you been? What are you doing nowadays?

B: I’m great, I’ve been playing chess with myself for ages now.

A: Oh? How’s that? Sounds a bit boring.

B: Oh no, it’s great fun, I cheat all the time.

A: But don’t you catch yourself?

B: Nah, I’m too clever.

One of the most amazing things about the brain is how it's wired to constructively fool us all the time. We only 'think' we're seeing the things we are. In effect, the brain is continuously short-circuiting our complex processing and presenting simple answers. This is brilliantly covered by Kahneman, among many others. Because if we had to process every single bit of information we encounter, we would never get through the day. The brain allows us to focus by filtering out complexity through a series of tricks. Peripheral vision, selective memory and many other sophisticated tricks are at play every minute to allow us to function normally. If you think about it, this is probably the brain's greatest trick – building and maintaining the elaborate hoax that keeps up the fine balance between normalcy and what we would call insanity, thereby allowing us to focus sharply on the specific information that needs a much higher level of active processing.

And yet, put millions of these wonderful brains together, and you get Donald Trump as president. You get Brexit, wars, environmental catastrophe, stupidity at an industrial scale, and a human history so chock-full of bad decisions that you wonder how we ever got here. (And if you're pro-Trump, then consider that even more people with the same incredible brain voted for Clinton.) You only have to speak with half a dozen employees of large companies to collect a legion of stories about mismanagement, and about how the intelligence of an organisation is often considerably less than the sum of its parts. I think it would be fair to say that we haven't yet mastered the ability to put our brains together in any reliably repeatable and synergistic way. We are very much in trial-and-error mode here.

This is one of the killer reasons why computers are soon going to be better than humans. In recent years, computers have been designed to network – to share, pool and exchange brain power. We moved from the original mainframe (one giant brain), to PCs (many small brains), to a truly cloud-based and networked era (many connected brains working collectively, much, much bigger than any one brain). One of the most obvious examples is blockchain. Another is the driverless car. Most of you might agree that, as of today, you would rather trust a human (perhaps yourself) than a computer at the wheel of your car. And you may be right to do so. But here are two things to ponder. First, your children will have to learn to drive all over again, from scratch. You might be able to give them some guidance, but realistically maybe 1% of the expertise you have accumulated over thousands of driving hours will transfer. Second, let's assume you hit an oil slick on the road and almost skid out of control. You may, from this experience, learn to recognise oil slicks, deal with them better, perhaps learn to avoid them or slow down. Unfortunately, only one brain will benefit: yours. Every single person must learn this by experience. When a driverless car crashes today because it mistakes a sky-blue truck for the sky, it also learns to make that distinction (or is made to). But importantly, this 'upgrade' goes to every single car using the same system, or brain. So you are now the beneficiary of the accumulated learning of every car on the road that shares this common brain.
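
To make the 'shared brain' point concrete, here is a toy sketch of fleet learning. It is purely illustrative: the class and method names are invented, and it is not a description of any manufacturer's actual system.

```python
# Toy illustration of fleet learning: one shared "brain", many cars.
# Everything here is schematic, not a real autonomous-driving stack.

class SharedDrivingModel:
    """A single model shared by every car in the fleet."""
    def __init__(self):
        self.known_hazards = set()

    def learn_hazard(self, hazard: str) -> None:
        # One car's experience updates the shared model once...
        self.known_hazards.add(hazard)

    def recognises(self, hazard: str) -> bool:
        # ...and every other car can query it immediately.
        return hazard in self.known_hazards


class Car:
    def __init__(self, name: str, model: SharedDrivingModel):
        self.name = name
        self.model = model  # every car points at the same shared model

    def encounter(self, hazard: str) -> None:
        if self.model.recognises(hazard):
            print(f"{self.name}: already knows how to handle '{hazard}'")
        else:
            print(f"{self.name}: new hazard '{hazard}', learning it and sharing it")
            self.model.learn_hazard(hazard)


shared_brain = SharedDrivingModel()
car_a, car_b = Car("Car A", shared_brain), Car("Car B", shared_brain)

car_a.encounter("oil slick")   # Car A learns the hard way
car_b.encounter("oil slick")   # Car B benefits without ever having skidded
```

The human equivalent would be every driver on the road instantly acquiring your oil-slick lesson the moment you learned it.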

Kevin Kelly talks about a number of different kinds of minds and brains that might emerge in the future, different from our own. You can see a very visual example of this in the movie Live Die Repeat, where the protagonists must take on an alien that lives through its all-seeing superbrain. It gets better. If, like the airline industry, automotive companies agree to share this information following every accident or near-miss, then you start to get the benefit of every car on the road, irrespective of make. Can you imagine how quickly your driverless car would start to learn? Nothing we currently know or can relate to prepares us for this exponential model of learning and improvement.

It's not just the collective, though. The supercomputer that is the brain fails us in a number of ways. Remember that the wondrous brain provides fantastic basic hardware and wiring, and possibly, if you will allow me to extend the analogy, the operating system. Thereafter, it is the quality of your learning, upkeep and performance management that takes over, and this is where we as humans start to stumble. Here are five ways in which we already lag behind computers:

Computation: This is the first and most obvious. Our computational abilities are minuscule compared to those of the average computer. This should require no great elaboration. But when you apply it to, say, calculating how hard to brake so that you stop before you hit the car that has just pulled out in front of you, but not so hard that you risk being hit by the car behind, you're already no match for the computer. Jobs that computers have taken over on the basis of computation include programmatic advertising buying and algorithmic trading. Another type of computation involves pattern recognition, for example checking scans for known problems, as doctors do.
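
As a rough illustration of the braking computation just described, here is a minimal sketch using a constant-deceleration model. The speed, gap and braking limit are made-up numbers, and it ignores reaction time, road conditions and the car behind:

```python
# Simplified constant-deceleration model of the braking decision described above.
# All numbers are illustrative assumptions.

def required_deceleration(speed_mps: float, gap_m: float) -> float:
    """Minimum constant deceleration (m/s^2) needed to stop within gap_m metres."""
    return speed_mps ** 2 / (2 * gap_m)   # from v^2 = 2 * a * d

speed = 17.9        # ~40 mph expressed in metres per second
gap_ahead = 25.0    # assumed distance to the car that has just pulled out
max_braking = 8.0   # rough limit of hard braking on dry tarmac (assumed)

needed = required_deceleration(speed, gap_ahead)
print(f"Need {needed:.1f} m/s^2 of braking; the car can manage about {max_braking} m/s^2")
print("Stop in time" if needed <= max_braking else "Cannot stop within the gap")
```

A computer runs this kind of calculation continuously, against live sensor data, in milliseconds.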

Observation: Would you know if the grip on your tyres has dropped by 10%? 5%? What if your engine is performing suboptimally, or your brakes are 3% looser than normal? Have you ever missed a speed limit sign as you come off a freeway or motorway? Have you ever realised with a fright that there was something in your blind spot? This one is obvious too: a computer, armed with sensors all around the car, is much less likely to miss an environmental or vehicular data point than you are. With smarter environments, you may not even need speed limit signs for automated cars. All this is before we factor in distractions, less-than-perfect eyesight and hearing, or simply inattentive driving. Other observation-based professions where computers are already at work include security and flight navigation.

Reaction time: Any driving instructor will tell you that the average driver takes around a second to react to a hazard. In other words, at 40 mph you will have covered about 17 metres before your brain and body even start to respond. By the time you've actually slammed the brakes or managed to swerve, you may well be 20-25 metres down the road. By contrast, there is already evidence of autonomous vehicles pre-empting a hazard and slowing down, even more so if the hazard involves another car using the same shared 'brain'. A lot of thought is currently being given to the reaction time of a human taking over when the autonomous system fails. This is of course a transient phase, until the reliability of the autonomous system reaches the point where the question is only theoretical.
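
The arithmetic behind those figures, as a quick sketch (the one-second reaction time is itself an approximation):

```python
# Reaction distance at 40 mph, assuming roughly a one-second reaction time.
mph_to_mps = 1609.344 / 3600      # metres per second in one mph

speed_mph = 40
reaction_time_s = 1.0             # assumed typical driver reaction time

speed_mps = speed_mph * mph_to_mps                 # ~17.9 m/s
reaction_distance_m = speed_mps * reaction_time_s  # ~18 m travelled before reacting

print(f"{speed_mph} mph is about {speed_mps:.1f} m/s")
print(f"Distance covered before you even start to react: {reaction_distance_m:.1f} m")
```

Add the distance covered while actually braking or swerving, and the 20-25 metre figure follows quickly.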

Judgement: The problem with our brilliant brains is that we rarely allow them to work to their potential. In the US in 2015, some 35,000 people were killed in traffic accidents; almost 3,500 of those deaths involved distracted driving, where the driver is cognitively disengaged. There is an endless list of reasons why we don't pay attention while driving: tiredness, stress, anger, conversing with somebody, or worse, alcohol and our phones. There have been studies showing that judges' decisions tend to be harsher as the judges get hungry. Great though our brains are, they are also very delicate, and easily influenced; our emotional state dramatically affects our judgement. And yet we often use judgement as a way of bypassing complex data processing. That is invaluable where the data doesn't exist. But with the increasing quantification of the world, we may need less judgement and simply more processing, as with 'Hawk-Eye' in tennis and DRS in cricket.

Training: How long did it take you to learn to drive? A week? A month? Three? How long did it take you to become a good driver? Six months? Going back to my earlier comments, this has to be repeated for each person, so the collective cost is huge. Computers can be trained much faster and do not need to gain experience one machine at a time. So in any job where you have to replace people, a computer will cut out your training time. This can include front desk operations, call centres, retail assistants, and many more. The time to train an engine such as IBM Watson has already gone from years to weeks.

So while we should agree that the human brain is marvellous for all it can do, it's important to recognise its many limitations. Let's also remember that the human brain has had an evolutionary head-start of some 6 million years, and the fact that we're having this discussion suggests that computers have reached some approximation of parity in about 60-odd years. So we shouldn't be under any illusions about how this will play out. But I wrote this piece to point out that even today there are many parameters along which the brain already lags behind its silicon-and-wire equivalent. A last cautionary point: the various cognitive functions of the brain peak at different points in our lives, some as early as our 20s and some later. But peak they do, and then we're on our way down!

Fortunately, for most industries, there should be a significant phase of overlap during which computers are actually used to improve our own functioning. Our window of opportunity for the next decade is to become experts at exploiting this help.


Why Are We Suddenly So Bad At Predicting the Future?

Imagine that a monkey got into the control room of the universe and spent the year clicking random buttons. Imagine him hopping about on the ‘one musician less’ button, stomping on the ‘auto-destruct’ lever and gurgling while he thumped repeatedly on the ‘introduce chaos’ switch. Close your eyes and picture him dropping a giant poo on the bright red panel marked ‘do not touch under any circumstances’. That, my friends, is the only way to think about 2016 – after all, it was the year of the monkey in the Chinese zodiac. It was the year when rational thinking took a beating, when meritocracy became a bad word, when liberalism escaped from the battlefield to a cave in the mountains to lick its wounds. And, not surprisingly, a year when projections, predictions and polls made as much sense in the real world as an episode of Game of Thrones on steroids.

Given that much of our lives is spent productively engaging with the future, making big and small decisions based on assumptions about how it will unfold, this last point matters more than the mere schadenfreude of laughing at pollsters and would-be intellectuals. The present passes too quickly, so really every decision you've ever made in your life counts on future events turning out in ways that are favourable. Getting this wrong is therefore injurious to health, to put it mildly. And yet our ability to predict the future has never been under such a cloud in living memory. Why is this so?

Fundamentally, we're wired to think linearly in time, space and even line of sight. We are taught compound interest, but we get it intellectually rather than viscerally. When you first encounter the classic rice-grains-and-chessboard problem, as a smart person you know that it'll be a big number, but hand on heart, can you say you got the order of magnitude right? That the total amount of rice on the chessboard would be roughly 1,000 times the world's rice production of 2010, approximately 461,168,602,000 metric tons? This compounding of effects is incredibly hard to truly appreciate, even before you start to factor in all the myriad issues that will bump the rate of change up or down, or when the curve hits a point of inflexion. The Bill Gates quote – 'we over-estimate the impact of technology in 2 years, and under-estimate the impact over 10' – is a direct reframing of this inability to think in a compound manner.
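
For the record, the chessboard arithmetic runs roughly as follows (assuming about 25 mg per grain of rice; the 2010 production figure is the commonly cited estimate):

```python
# Rice-and-chessboard arithmetic: one grain on the first square, doubling on each of 64 squares.
grains = sum(2 ** square for square in range(64))   # = 2**64 - 1, about 1.8e19 grains

grain_mass_kg = 25e-6                               # assumed mass of a single grain (~25 mg)
total_tonnes = grains * grain_mass_kg / 1000        # about 4.6e11 metric tons

world_rice_2010_tonnes = 464e6                      # commonly cited global rice production, 2010
print(f"Grains on the board: {grains:,}")
print(f"Total mass: {total_tonnes:,.0f} metric tons")
print(f"About {total_tonnes / world_rice_2010_tonnes:,.0f} times the 2010 world rice crop")
```

Even knowing the doubling rule, few of us would have guessed a number with eleven zeros in it.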

Then there's the matter of space and line of sight. The way the future unfolds is dramatically shaped by network effects. The progress of an idea depends on its cross-fertilisation across fields, geographies and disciplines, across any number of people, networks and collaborations. These collaborations can be engineered up to a point, or they are the result of a fortuitous clustering of minds. In his book 'Linked', Albert-László Barabási talks about the mathematician Paul Erdős, who spent his life nomadically, travelling from one associate's home to another discussing mathematics and, fittingly, network theory. Not surprisingly, it is a lifestyle also practised for many years by a young Bob Dylan, if you substitute music for mathematics. Or consider the story of the serial entrepreneur in the Rhineland in the 1400s, as told by Steven Johnson in 'Where Good Ideas Come From'. Having failed with a business in mirrors, he was working in the wine industry, where the mechanical pressing of grapes had transformed the economics of winemaking. He took the wine press and married it with a Chinese invention, movable type, to create the world's first printing press. His name, of course, was Johannes Gutenberg. This kind of leap is not easy to predict, not just because of the discontinuity it represents (more on that later), but also because of these networked effects. Our education system blinkers us into compartmentalised thinking which stays with us through our lives. Long ago, a student of my mother's once answered a question about the boiling point of water by saying "in Chemistry it's 100 degrees Centigrade, but in Physics I'm not sure". We are trained to be specialists, becoming narrower and narrower as we progress through our academic careers, ending up more or less as stereotypes of our profession. Yet human progress is driven by thousands of these networked, collaborative and often serendipitous examples. And we live in a world today with ever-expanding connections, so it's not surprising that we have fallen significantly behind in our ability to understand how the network effects play out.

If you want to study the way we typically make predictions, look no further than sport. In the UK, football is a year-round sport, so there are games every weekend for nine months, and midweek for half the year. And with gambling being legal, there is an entire industry around football betting. Yet the average punter, fan or journalist makes predictions which are at best wilfully lazy. There is an apocryphal story about our two favourite fictitious sardars, Santa Singh and Banta Singh, who decide to fly a plane. Santa, the pilot, asks Banta, the co-pilot, to check if the indicators are working. Banta looks out over the wing and says "yes they are, no they aren't, yes they are, no they aren't…" – and this is how a lot of predictions are made in the world of Premier League football today. Any team that loses three games is immediately in a 'crisis', while a team that wins a couple of games is deemed to be on its way to glory. Alan Hansen, an otherwise insightful pundit and former great player, will always be remembered for one comment, "You can't win anything with kids", made after watching a young Manchester United side lose to Aston Villa at the start of the 1995-96 season. Manchester United, of course, went on to win the league that season and dominate it for the next decade and a half. Nobody predicted a Leicester City title in 2016, but win it they did. The continuous and vertiginous increase in TV income for football clubs has created a relatively more equal playing field when it comes to global scouting networks, so a great player can pop up in any team and surprise the league. Yet we find it hard to take in these underlying trends, and often find ourselves guilty of treating incidents as trends.

The opposite, amazingly, is also true. We are so caught up with trends that we don't factor in the kinks in the curve. Or, to use Steve Jobs' phrase, the dent in the universe. You can say that an iPhone-like device was sure to come along sooner or later. But given the state of the market, with Nokia's dominance and 40% global market share, you would have bet your house on Nokia producing the next breakthrough device. Nobody saw the iPhone coming, but when it did it created a discontinuous change that rippled across almost every industry over the next decade. The thing is, we like trends. Trends are rational, and they form a kind of reassuring continuity, so that events can fit our narratives, which in turn reaffirm our world view. And unless we're close to the event, or are perennial change-seekers and nomads ourselves, it's hard to think of counter-cyclical events. It's now easy to see how, in 2016, we were so caught up in the narrative of progressive liberalisation and the unstoppable march of globalisation that we failed to spot the counter-cyclical events and cues that were right there in our path.

In fact, there are any number of cognitive biases we are guilty of on an everyday basis. This article lists just a dozen of them. My favourites in this list are the confirmation bias and the negativity bias, both of which are exacerbated by social and digital media. While social media has led us into the echo chambers that became the hallmark of 2016, our projection bias is also accentuated by our ability to choose whatever media we want to consume in a digital world where access is the easy part. Similarly, bad news spreads faster on social networks and digital media today than at any time in history. Is it possible that, despite knowing about and guarding against these biases in the past, we've been caught out by the spike in the impact and incidence of a couple of them in the digital environment we live in today?

To be fair, not everybody got everything wrong. Plenty of people I know called the Donald Trump victory early in the game, and, among others, John Battelle got more than his share of predictions right. There is no reason to believe that 2017 will be any less volatile or unpredictable than 2016, but will our ability to deal with that volatility improve? One of the more cynical tricks of the prediction game is to make lots of predictions on many different occasions; people won't remember all your bad calls, and you can pick out the ones you got right at leisure. This is your chance, then, to make your predictions for 2017. Be bold, be counter-cyclical. And shout it out! Don't be demure. The monkey is history, after all. This is the year of the rooster!