The chief economist of the Bank of England, Andy Haldane, warned recently that artificial intelligence (AI) will have a significant impact on many jobs in Great Britain.1 Deutsche Bank CEO John Cryan stated that a ‘large number’ of Deutsche Bank’s 100,000-strong workforce will fall victim to automation.2 Scientists from the Department of Engineering Science at the University of Oxford have analysed 702 occupations, ranked them according to their probability of computerisation and concluded that 47% of total US employment is at risk in the next decade or two.3 Author Jeremy Rifkin calls this the ‘end of work’ in his book4 of the same title, writing that ‘today all sectors of the economy are experiencing technological displacement, forcing millions onto the unemployment rolls’. This does not sound too promising, yet the term ‘technological unemployment’ is hardly a new one.5
Historically, technological innovations have been met with concern and resistance. In 1589, William Lee requested a patent for his stocking frame knitting machine. Queen Elizabeth refused to grant it6 out of concern over the loss of employment. And in 1779, when the UK parliament revoked a law from 1551 that prohibited the use of gig mills in the wool-finishing trade, riots erupted with such force that the government had to send in 12,000 troops to defuse the situation.7
But resistance to technological innovations did not prevent the share of Americans working in agriculture from falling from 90% in 1800 to 2% in 2000. This was due first to the invention of the coal-powered steam engine and later to the oil-powered internal combustion engine. The impact was dramatic, but over time the affected population moved into the cities, found work in newly erected factories and exchanged their horses for a Ford Model T. That’s largely the argument today, too: new technology will spawn new business concepts to accommodate all those losing their jobs to robots and AI. But before we examine that argument, let’s answer this question first: AI, why now?
To understand what is happening, we must consider two concepts. The first, Moore’s Law, is familiar to everyone in information technology (IT), so much so that it has almost become a cliché. Gordon Moore, co-founder of Intel, predicted in 1965 that the number of transistors in integrated circuits would double every 12 months. Later modified to 18 months, this time frame is today accepted as a correct description of the rate of progress.
The second concept involves the well-known legend of the chessboard, in which the emperor offers a reward of choice to his loyal servant. The servant requests as much rice as is produced by doubling the number of grains on each of the 64 squares of a chessboard (one grain on square one, two grains on square two and so on). It starts modest, and remains so for an emperor at least, until about square 32 (a few billion grains of rice), but from there on the increase becomes truly staggering. It is estimated that by square 64 the amount of rice would cover the globe about one metre deep. The point is that exponential growth can look modest, almost linear, at first; only past a certain threshold does its true scale become apparent.
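The arithmetic behind the parable is easy to check. Below is a minimal sketch (in Python, chosen here purely for illustration) that prints the grain counts for a few squares:

```python
# Grains of rice on square n of the chessboard: 2**(n - 1).
for square in (1, 2, 32, 64):
    grains = 2 ** (square - 1)
    print(f"square {square}: {grains:,} grains")

# Output:
# square 1: 1 grains
# square 2: 2 grains
# square 32: 2,147,483,648 grains
# square 64: 9,223,372,036,854,775,808 grains
```

Note that square 64 alone holds more rice (2^63 grains) than all 63 previous squares combined (2^63 - 1 grains): each step in the later squares dwarfs everything that came before.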
So, AI, why now?
The parable of the chessboard and Moore’s Law intersected in 2006. Erik Brynjolfsson and Andrew McAfee8 of MIT took the first use of the term ‘information technology’, in the Harvard Business Review of 1958, as their starting point. Assuming a doubling of processing power every 18 months, 32 doublings take 48 years, which means we entered the second half of the chessboard in 2006. We are thus at the dawn of a truly new age, and it is hard to overstate the possibilities, both good and bad, of what lies ahead. Ray Kurzweil9 predicts that ‘Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history.’
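That 2006 date follows from simple arithmetic; here is a minimal sketch of the same back-of-the-envelope calculation, under the authors’ own assumptions (a 1958 starting point and an 18-month doubling period):

```python
# Back-of-the-envelope check: 32 doublings take processing power
# from square 1 to square 33, the first square of the chessboard's
# second half. The start year and doubling period are the authors'
# assumptions, quoted from the text above.
START_YEAR = 1958          # first use of the term 'information technology'
DOUBLING_TIME_YEARS = 1.5  # Moore's Law, 18-month variant
DOUBLINGS = 32             # doublings needed to cross into the second half

year = START_YEAR + DOUBLINGS * DOUBLING_TIME_YEARS
print(int(year))  # -> 2006
```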
Most of us have stepped onto the second half of the chessboard without noticing, but it is clear that AI is already amongst us. Google search, Apple’s Siri, spam filters, Netflix recommendations, Facebook feeds, shopping recommendations on Amazon: all of it is powered by AI, not to mention the apps on your phone that track not only you and your activities but also your friends’.
The reason that AI has come to the public’s awareness in the last 5 years or so is the now familiar exponential growth in processing power, combined with the massive amount of data we produce; without both, neural network-based machine learning would be impossible. Add to that our ever-faster adoption of new technologies and it becomes clear why we have started noticing AI in our lives. It took Facebook 3.5 years to acquire 50 million users, WhatsApp 15 months and Angry Birds only 15 days.10
Brian Arthur13 calls AI a second economy which exists in parallel to the physical one. He describes it as vast, silent, connected, unseen, autonomous, self-configuring, self-organizing and self-healing. One might want to add: never tired, never on leave, never ill and never on strike.
But it’s not only the various social media platforms that collect our data. Less obvious is our passive digital footprint, which is harvested by companies and governments without much public awareness. This includes (but is not limited to) our location data, sometimes down to the level of how long we look at a certain product in a store, or our facial expressions and emotions when watching anything on a smart TV, tablet or phone with a suitable camera.
Meanwhile, computer-based personality judgments have become more accurate than those of even close relatives.11 This means that, after just a few hundred ‘likes’, Facebook knows you better than your spouse, family or friends.
If that seems innocent enough, consider this: data mining, or the merging and harvesting of countless data streams, has opened the door to what is called ‘anticipatory intelligence’, the newest frontier of big data. Since 2014 this technology has predicted critical societal events with astounding accuracy12 and it can easily be exploited to manipulate, say, an election (and no, this is not a reference to Russia).
Meanwhile, AI pushes the frontiers on the literal playing fields. In 1997, IBM’s Deep Blue defeated the reigning world chess champion Garry Kasparov. In 2011, IBM’s Watson won the TV quiz show Jeopardy! against the two best human players. Watson has enormous potential for health care, as it can hold information about every known illness and medicine in its databanks, which can be linked to and fed with the newest research and medical statistics from any hospital on the planet. If you thought doctors were exempt from technological unemployment, think again: physicians and pharmacists are prime candidates for substitution by AI.
Meanwhile, in 2017 Libratus, a poker-playing AI, won $1.5 million in a 3-week tournament against four of the world’s best poker players. And even the ancient Chinese board game Go, considered significantly more complex than chess, was no match for Google DeepMind’s AlphaGo, which in 2016 and 2017 overwhelmed several of the world’s best Go players with unconventional manoeuvres that astonished the experts. And mind you, its successor AlphaGo Zero reached an even higher level of play entirely self-taught, through self-play alone and without studying a single human game.
That AI has defeated humans in several games is viewed by some as mere PR stunts, but other advances are less publicized. AI has entered the global stock markets, not as a commodity but as an intelligent algorithm replacing human traders. A Hong Kong venture capital firm14 has appointed an algorithm called VITAL to its board. VITAL gets to vote on whether or not to make an investment, like any other board member, but it makes its selection by analysing huge amounts of data. Not surprisingly, VITAL is biased towards investments in companies which deploy AI.
Lawyers who, contrary to what we see on TV, spend most of their time researching precedents and loopholes in the law, are being replaced by AI capable of scanning more than 500,000 documents in just hours. And scientific research15 shows that lawyers, judges and detectives achieve no better than chance levels of accuracy when it comes to lie detection. Because lying involves different brain areas from those activated when telling the truth, it is conceivable that functional magnetic resonance imaging (fMRI) scanners will soon function as the ultimate lie detectors. Once the size and price of these devices reach mass-market levels, AI might eliminate the need for vast swathes of law enforcement.
Transportation and logistics (think self-driving vehicles) and the bulk of office and administrative support work are due to be substituted by automation in the near future. As for aviation, the reason we still see pilots in modern airplanes is arguably purely psychological: who would really board a plane without a pilot?
Millions of people working in call centres across the world are already under pressure due to the ongoing automation of many related business processes. This, in combination with the rapid progress in natural language processing (NLP) and speech recognition, has led to a complete overhaul of the industry. Before this article is posted, it goes to an editor, another group soon to be extinct, a fate shared by linguists and translators, who are already competing against cheap apps on our phones that are getting better much faster than experts thought possible just a short while ago.
The intent here is not to paint a doomsday scenario but to demonstrate that, while we as a society have accepted, at least to some degree, that the workforce keeps moving one step ahead of mechanisation, this trend is not, and cannot be, open-ended. When farming became mechanised, large parts of the workforce moved to manufacturing jobs. When robots made their entrance into factories and warehouses, the workforce moved, and is still moving, into the service sector. But now many of the business processes in the service sector are being replaced by intelligent algorithms capable of the cognitive computation that so far has been the domain of the white-collar worker.
So what is the next evolution? As stated at the beginning, the current consensus amongst politicians seems to be that the new technology will spawn enough new businesses to compensate for all those jobs that will be lost to robots and AI. Given this technology’s all-pervasiveness, scientists and business leaders agree that the impact will be tremendous and should be managed with care if we are to avoid the social and economic costs of massive unemployment.
What has all this to do with behaviour analysis? As we enter this brave new world, we have to realize how much power we give to the new technologies that are all around us today, how well they already know us after just a couple of clicks and how easy it is for AI to predict and control what we think and do. Awareness of our own behaviours, drives and motivations, and even of the roots of our own intentions (manufactured or genuine), is crucial to prevent being manipulated into herd behaviour determined by algorithms designed to maximize profit at all costs. The choice is yours.
References:
- Ahmed, K. (2018). Bank of England chief economist warns on AI jobs threat [online]. Available at: https://www.bbc.co.uk/news/business-45240758 [Accessed 2 Sept. 2018].
- Financial Times. Deutsche boss Cryan warns of big number of job losses from tech change [online]. Available at: https://www.ft.com/content/62ee1265-dce7-352f-b103-6eeb747d4998 [Accessed 2 Sept. 2018].
- Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, pp. 254-280.
- Rifkin, J. (1995). The end of work: The decline of the global labor force and the dawn of the post-market era. G.P. Putnam’s Sons, New York.
- Keynes, J. M. (2010). Economic possibilities for our grandchildren. In: Essays in Persuasion (pp. 321-332). Palgrave Macmillan, London.
- Robinson, J. A., & Acemoglu, D. (2012). Why nations fail: The origins of power, prosperity and poverty. Crown Business, New York.
- Mantoux, P. (2013). The industrial revolution in the eighteenth century: An outline of the beginnings of the modern factory system in England.
- Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company.
- Kurzweil, R. (2004). The law of accelerating returns. In Alan Turing: Life and legacy of a great thinker (pp. 381-416). Springer, Berlin, Heidelberg.
- Advances in technology [online]. Available at: http://climateerinvest.blogspot.com/2015/12/blackrock-on-advances-in-technology.html [Accessed 2 Sept. 2018].
- Youyou, W., Kosinski, M., & Stillwell, D. (2015). Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences, 112(4), pp. 1036-1040.
- Doyle, A., Katz, G., Summers, K., Ackermann, C., Zavorin, I., Lim, Z., … & Lu, C. T. (2014). Forecasting significant societal events using the EMBERS streaming predictive analytics system. Big Data, 2(4), pp. 185-195.
- Arthur, W. B. (2011). The second economy. McKinsey Quarterly, 4, pp. 90-99.
- Wile, R. (2014). A venture capital firm just named an algorithm to its board [online]. Available at: https://www.businessinsider.com/vital-named-to-board-2014-5?international=true&r=US&IR=T [Accessed 2 Sept. 2018].
- Vrij, A., & Mann, S. (2001). Who killed my relative? Police officers’ ability to detect real-life high-stake lies. Psychology, Crime & Law, 7(2), pp. 119-132.