So You Think The Brain is Better Than The Computer?


Every discussion on the power of computers is bracketed by the comparison to the human brain, and the dwarfing of any known computer by the fantastical power of the human brain. Estimates by Ray Kurzweil suggest a capability of 10^16, or 10 quadrillion, calculations per second (cps). And it runs on 20 watts of ‘power’. By comparison (according to this excellent article that everybody should read), the world’s best computer today can do 34 quadrillion cps, but it occupies 720 square meters of space, costs $390m to build and requires 24 megawatts of power.
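Just to put those two datapoints side by side, here is the back-of-the-envelope arithmetic, using only the figures quoted above (which are themselves rough estimates):

```python
# Back-of-the-envelope comparison using only the figures quoted above.
brain_cps = 1e16        # Kurzweil's estimate: 10 quadrillion calculations/sec
brain_watts = 20        # the brain's approximate power draw

computer_cps = 3.4e16   # ~34 quadrillion cps for the top supercomputer cited
computer_watts = 24e6   # 24 megawatts

brain_eff = brain_cps / brain_watts            # calculations per second per watt
computer_eff = computer_cps / computer_watts

print(f"Brain:    {brain_eff:.1e} cps/W")
print(f"Computer: {computer_eff:.1e} cps/W")
print(f"The brain is roughly {brain_eff / computer_eff:,.0f}x more power-efficient")
```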

Besides, that’s just the ‘hardware’, so to say. The brain’s sophistication is far, far ahead of the computer’s, considering all the miraculous things it can do. We know now that the biggest evolution of the human brain was the growth of the prefrontal cortex, which required a rethink of the interior design of the skull. A key facet of the brain is that it is a neural network, capable of massively parallel processing: simultaneously collecting and processing huge amounts of disparate data. I’m tapping away on a laptop, savouring the smell and taste of coffee, while listening to music on a cold, cloudy day in a warm cafe surrounded by art. The brain is simultaneously assimilating the olfactory, visual, aural, haptic and environmental signals, without missing a beat.

It would appear, therefore, that we are decades away from computers which can replace brain functions and, therefore, jobs. Let’s look at this a little more closely, though.

The same article by Tim Urban shows in great detail how the exponential trajectory of computers and software will probably lead to affordable computers with the capacity of a human brain arriving by 2025 and, more scarily, achieving the computing capacity of all humans put together by 2040. This is made possible by any number of individual developments and the collective effort of the computer science and software industry. Kevin Kelly points to three key accelerators, apart from the well-known Moore’s law: the evolution of graphics chips capable of parallel processing, which has made building neural networks cheap; the growth of big data, which allows these ever more capable computers to be trained; and the development of deep learning, the layered, algorithmically driven learning process which brings much efficiency to how machines learn.
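To make ‘layered’ learning a little more concrete, here is a minimal sketch of what stacking layers looks like in code. The random weights are stand-ins for what training on big data would actually learn:

```python
import numpy as np

# A minimal sketch of "layered" learning: depth comes from stacking simple
# transformations. The random weights below are stand-ins for what training
# on big data would actually learn.
rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer followed by a ReLU non-linearity."""
    w = rng.normal(size=(x.shape[0], n_out))  # stand-in for learned weights
    return np.maximum(0.0, x @ w)             # ReLU keeps the network non-linear

x = rng.normal(size=(4,))  # a toy input with 4 features
h1 = layer(x, 8)           # layer 1: low-level features
h2 = layer(h1, 8)          # layer 2: combinations of those features
out = layer(h2, 2)         # layer 3: a two-way decision
print(out)
```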

So the hubris around the human brain may survive another decade at best, and thereafter the question might not be whether computers can be as good as humans, but how much better than the human brain the computer could be. But that has been well argued, and no doubt will be again, including the moral, ethical and societal challenges it will bring.

I actually want to look at the present and sound a note of warning to all those still in the camp of ‘human brain hubris’. Let me start with another compliment to the brain. Consider this discussion between two friends meeting after ages.

A: How have you been? What are you doing nowadays?

B: I’m great, I’ve been playing chess with myself for ages now.

A: Oh? How’s that? Sounds a bit boring.

B: Oh no, it’s great fun, I cheat all the time.

A: But don’t you catch yourself?

B: Nah, I’m too clever.

One of the most amazing things about the brain is how it’s wired to constructively fool us all the time. We only ‘think’ we’re seeing the things we are. In effect, the brain is continuously short-circuiting our complex processing and presenting simple answers. This is brilliantly covered by Kahneman, and many others. Because, if we had to process every single bit of information we encounter, we would never get through the day. The brain allows us to focus by filtering out complexity through a series of tricks. Peripheral vision, selective memory and many other sophisticated tricks are at play every minute to allow us to function normally. If you think about it, this is probably the brain’s greatest trick: building and maintaining this elaborate hoax that keeps up the fine balance between normalcy and what we would call insanity, thereby allowing us to focus sharply on specific information that needs a much higher level of active processing.

And yet, put millions of these wonderful brains together, and you get Donald Trump as president. You get Brexit, wars, environmental catastrophe, stupidity on an industrial scale, and a human history so chock-full of bad decisions that you wonder how we ever got here. (And if you’re pro-Trump, then consider that even more people with the same incredible brain voted for Clinton.) You only have to speak with half a dozen employees of large companies to collect a legion of stories about mismanagement, and about how the intelligence of organisations is often considerably less than the sum of its parts. I think it would be fair to say that we haven’t yet mastered the ability to put our brains together in any kind of reliably repeatable and synergistic way. We are very much in trial-and-error mode here.

This is one of the killer reasons why computers are soon going to be better than humans. In recent years, computers have been designed to network: to share, pool and exchange brain power. We moved from the original mainframe (one giant brain), to PCs (many small brains), to a truly cloud-based and networked era (many connected brains working collectively, much, much bigger than any one brain). One of the most obvious examples is blockchain. Another is the driverless car. Most of you might agree that, as of today, you would rather trust a human (perhaps yourself) than a computer at the wheel of your car. And you may be right to do so. But here are two things to ponder. First, your children will have to learn to drive all over again, from scratch. You might be able to give them some guidance, but realistically maybe 1% of the expertise accumulated over your thousands of driving hours will transfer. Second, let’s assume you hit an oil slick on the road and almost skid out of control. You may, from this experience, learn to recognise oil slicks, deal with them better, perhaps learn to avoid them or slow down. Unfortunately, only one brain will benefit from this: yours. Every single person must learn this by experience. When a driverless car crashes today because it mistakes a sky-blue truck for the sky, it also learns to make that distinction (or is made to). But importantly, this ‘upgrade’ goes to every single car using the same system, or brain. So you are now the beneficiary of the accumulated learning of every car on the road that shares this common brain, as the sketch below illustrates.
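Here is that ‘shared brain’ idea in miniature. This is purely an illustration with invented class names, not a description of how any actual manufacturer’s system works:

```python
# The "shared brain" in miniature. Class and method names are invented
# for illustration; no real manufacturer's system is being described.

class FleetBrain:
    """One shared model that every car in the fleet drives with."""
    def __init__(self):
        self.known_hazards = set()

    def learn(self, hazard):
        self.known_hazards.add(hazard)  # one car's lesson, stored centrally

    def recognises(self, hazard):
        return hazard in self.known_hazards

class Car:
    def __init__(self, brain):
        self.brain = brain  # every car points at the same shared brain

    def encounter(self, hazard):
        if not self.brain.recognises(hazard):
            self.brain.learn(hazard)  # the 'upgrade' propagates to all cars

shared = FleetBrain()
fleet = [Car(shared) for _ in range(1000)]

fleet[0].encounter("oil slick")                  # one car skids, once
print(fleet[999].brain.recognises("oil slick"))  # True: every car now knows
```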

Kevin Kelly talks about a number of different kinds of minds and brains that might ensue in the future, different from our own. You can see a very visual example of this in the movie Live Die Repeat, where the protagonists must take on an alien that lives through its all-seeing superbrain. It gets better. If, like the airline industry, automotive companies agree to share this information following every accident or near-miss, then you start to get the benefit of every car on the road, irrespective of who made it. Can you imagine how quickly your driverless car would start to learn? Nothing we currently know or can relate to prepares us for this exponential model of learning and improvement.

It’s not just the collective, though. The super-computer that is the brain fails us in a number of ways. Remember that the wondrous brain is fantastic as the basic hardware and wiring, and possibly, if you will allow me to extend the analogy, the operating system. Thereafter, it is the quality of your learning, upkeep and performance management that takes over, and this is where we as humans start to stumble. Here are five ways in which we already lag behind computers:

Computation: This is the first and most obvious. Our computational abilities are vanishingly small compared to those of the average computer. This should require no great elaboration. But when you apply it to, say, calculating the braking needed to stop before you hit the car that has just popped out in front of you, but not to brake so hard that you risk being hit by the car behind you, you’re already no match for the computer. Jobs that computers have taken over on the basis of computation include programmatic advertisement buying and algorithmic trading. Another type of computation involves pattern recognition: for example, checking scans for known problems, as doctors do. A toy version of the braking problem is sketched below.
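To give a flavour of that braking computation, here is a toy version, assuming constant deceleration and dry-road friction (the numbers are illustrative):

```python
# A toy version of the braking computation: given the gap to the car that
# has just pulled out, what deceleration stops us in time, and can the
# tyres deliver it? Constant deceleration and a dry road are assumed.

def required_deceleration(speed_ms, gap_m):
    """Deceleration (m/s^2) needed to stop within gap_m, from v^2 = 2*a*d."""
    return speed_ms ** 2 / (2 * gap_m)

MU, G = 0.7, 9.81     # typical dry-tarmac friction coefficient, gravity
MAX_BRAKING = MU * G  # ~6.9 m/s^2 before the tyres lose grip

speed = 40 * 0.447    # 40 mph in m/s (~17.9 m/s)
gap = 30.0            # meters to the obstacle

a = required_deceleration(speed, gap)
verdict = "can stop in time" if a <= MAX_BRAKING else "cannot stop: swerve or brace"
print(f"Need {a:.1f} m/s^2, tyres allow {MAX_BRAKING:.1f} m/s^2 -> {verdict}")
```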

Observation: Would you know if the grip on your tyres had dropped by 10%? 5%? What if your engine were performing sub-optimally, or your brakes were 3% looser than normal? Have you ever missed a speed limit sign as you came off a freeway or motorway? Have you ever realised with a fright that there was something in your blind spot? This one is particularly obvious as well. A computer, armed with sensors all around the car, is much less likely to miss an environmental or vehicular data point than you are. With smarter environments, you may not even need speed limit signs for automated cars. All this is before we factor in distractions, less-than-perfect eyesight and hearing, and plain unobservant driving. Other observation-based professions where computers are already at work include security and flight navigation. A sketch of this kind of continuous self-monitoring follows.
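A minimal sketch of that continuous self-monitoring, with invented sensor names, baselines and thresholds:

```python
# Illustrative sketch: a car polling its own sensors against factory
# baselines and flagging small drifts a human driver would never notice.
# Sensor names, baselines and the 2% threshold are all made up here.

BASELINES = {"tyre_grip": 1.00, "brake_tension": 1.00, "engine_efficiency": 1.00}
ALERT_DRIFT = 0.02  # flag anything more than 2% off its baseline

def check(readings):
    alerts = []
    for sensor, value in readings.items():
        drift = abs(value - BASELINES[sensor]) / BASELINES[sensor]
        if drift > ALERT_DRIFT:
            alerts.append(f"{sensor} is {drift:.0%} off baseline")
    return alerts

# A 10% drop in grip and 3% looser brakes, invisible from the driver's seat:
print(check({"tyre_grip": 0.90, "brake_tension": 0.97, "engine_efficiency": 0.99}))
```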

Reaction time: Any driving instructor will tell you that the average human reaction time is around a second. In other words, at 40 mph, you will have covered about 17 meters before your brain and body even start to react. By the time you’ve actually slammed the brakes or managed to swerve the car, you may well be 20-25 meters down the road. By contrast, there is already evidence of autonomous vehicles being able to pre-empt a hazard and slow down, even more so if the hazard involves another car using the same shared ‘brain’. A lot of thought is currently being given to the reaction time of a human taking over when the autonomous system fails. This is of course a transient phase, until the reliability of the autonomous system reaches a point where this is only a theoretical discussion.
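The arithmetic behind those distances, for a human versus a machine. The one-second human figure matches the 17 meters at 40 mph above; the 50-millisecond machine figure is an assumption for illustration:

```python
# Distance covered before the driver (or system) even begins to respond.
# The ~1 s human figure matches the ~17 m at 40 mph quoted above; the
# 50 ms machine reaction time is an illustrative assumption.

MPH_TO_MS = 0.447  # miles per hour to meters per second

def reaction_distance_m(speed_mph, reaction_s):
    return speed_mph * MPH_TO_MS * reaction_s

for speed in (30, 40, 70):
    human = reaction_distance_m(speed, 1.0)
    machine = reaction_distance_m(speed, 0.05)
    print(f"{speed} mph: human ~{human:.1f} m, machine ~{machine:.1f} m")
```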

Judgement: The problem with our brilliant brains is that we rarely allow them to work to their potential. In the US in 2015, some 35,000 people were killed in traffic accidents, and almost 3,500 of those deaths were linked to distracted driving, where the driver was cognitively disengaged. There are endless reasons why we’re not paying attention when we’re driving: tiredness, stress, anger, conversing with somebody, or worse, alcohol or our phones. There are studies showing that judges’ decisions tend to get harsher as the judges get hungrier. Great though our brains are, they are also very delicate and easily influenced: our emotional state dramatically impacts our judgement. And yet we often use judgement as a way of bypassing complex data processing. That is invaluable where the data doesn’t exist. But with the increasing quantification of the world, we may need less judgement and simply more processing, as with ‘Hawk-Eye’ in tennis and the ‘DRS’ in cricket.

Training: How long did it take you to learn to drive? A week? A month? Three? How long did it take you to become a good driver? Six months? Going back to my earlier comments, this has to be repeated from scratch for each person, so the collective cost is huge. Computers can be trained much faster, and do not need to acquire the experiential component one computer at a time. So in any job where you have to replace people, a computer will cut out your training time. This can include front desk operations, call centres, retail assistants and many more. The time to train an engine such as IBM Watson has already gone from years to weeks. The arithmetic is sketched below.
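The training economics in miniature, with purely illustrative figures:

```python
# The training economics in miniature; all figures are illustrative.
HUMAN_TRAINING_HOURS = 40     # say, a working week per new hire
MACHINE_TRAINING_HOURS = 200  # training the model once may even cost more
WORKERS = 10_000

human_total = HUMAN_TRAINING_HOURS * WORKERS  # every person learns from scratch
machine_total = MACHINE_TRAINING_HOURS        # train once; copies are free

print(f"Humans:   {human_total:,} hours")     # 400,000 hours
print(f"Machines: {machine_total:,} hours")   # 200 hours, however many copies
```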

So while we should agree that the human brain is marvellous for all it can do, it’s important to recognise its many limitations. Let’s also remember that the human brain has had an evolutionary head start of some 6 million years, and the fact that we’re having this discussion suggests that computers have reached some approximation of parity in about 60-odd years. So we shouldn’t be under any illusions about how this will play out. But I wrote this piece to point out that, even today, there are many parameters along which the brain already lags behind its silicon-and-wire-based equivalent. A last cautionary point: the various cognitive functions of the brain peak at different points in our lives, some as early as our 20s and some later. But peak they do, and then we’re on our way down!

Fortunately, for most industries, there should be a significant phase of overlap during which computers are actually used to improve our own functioning. Our window of opportunity for the next decade is to become experts at exploiting this help.
