May 21, 2018

Artificial Intelligence: A Prize Wrapped Within a Social Conundrum


Artificial intelligence promises a bright future but getting there isn’t without challenges. A 2016 White House report stated that AI-driven automation technology could potentially increase economic inequality by amplifying the wage gap between less-educated and more-educated workers. A Pew Research Center survey in 2015 found that while 65 percent of Americans believe that robots and computers will “definitely” or “probably” take over human jobs in the next 50 years, 80 percent don’t think that their own jobs will be at risk. Unfortunately, our personal biases don’t make us great at prediction. If governments don’t guide society through this transition, we will see periods of strong dislocation including acute inequality and significant unemployment. 

Below I explore three key trends that make this a particularly challenging transition for governments to navigate.

Technological Change Has Hit Warp Speed

Technological advancement, in both its development and its diffusion, is accelerating. A Harvard Business Review article contrasts the rise of different innovations: whereas the landline telephone took decades to reach 50 percent penetration of households, cell phones reached the same level within only five years. We can expect a similarly accelerated rate of diffusion of automation technology and AI in the coming years. The problem is that our labor markets and underlying systems for education, economic policy, and organizational structures are far more rigid. This will lead to a great divide between those who are fluent in a new tech reality and those unable to adapt quickly enough. In the age of computing, Moore's Law observed the doubling of transistors on integrated circuits every two years; in the age of automation, we'll likely talk about labor displacement doubling every ten years as a result of AI (let's call this the Luddite's Law). In fact, the McKinsey Global Institute ran a scenario analysis suggesting half of today's work activities could be automated by 2055.

It's About the Economics

Economics is the second force that will exacerbate the automation technology disruption. Ryan Avent (a senior editor at The Economist) recently stated in a Quartz interview: “Because machines are increasingly good at doing what humans will do, the only jobs that the average human will really be able to get are those where it’s attractive to hire them because they’re really cheap.” Governments must carefully balance policy: while raising the minimum wage (as the states of California, New York, and Washington have done) is socially commendable, it makes investment in automation technology more appealing. Each industry will have its own economic tipping point with automation, but one thing is for certain: automation will become cheaper and better, quickly. According to the Boston Consulting Group, it costs barely $8 an hour to use a robot for spot welding in the US auto industry, compared to $25 for a worker.

We’re Now in Deep…Data

Digitization, the Internet of Things, and big data have created an increasingly complex repository of data unmanageable by any human. During the Industrial Revolution, machines were built for power and force. This new age of automation technology moves from horsepower to brainpower, and from mining gold to mining data. The availability of data and inputs has led to a rapid evolution from simple automation technology for routine and linear tasks to cognition-based activities. Computers have evolved from beating humans at relatively complex games like chess and Jeopardy, to outperforming doctors in predicting heart attacks. With the proliferation of smart devices, cheap sensors, cloud infrastructure, and the accessibility of the internet, there is no end in sight to the flood of data needing interpretation. We will increasingly rely on machines to tackle this deluge of information, which will only make them smarter and better equipped to perform human-based tasks.

Different Times Require Different Measures

Dealing with the likely disruptions brought on by the widespread infiltration of artificial intelligence is not a trivial task and requires new approaches from policymakers. Below are a few recommended principles that can advance collaborative and innovative frameworks in managing the future of automation and AI:

- Protect livelihoods while retraining for new opportunities. Governments must take stock of their human talent pool and create an adequate environment for labor markets to evolve. They must first diagnose the industries most likely to be impacted by automation technology and chart a path to retrain workers for higher value-added functions. In parallel, they must create adequate safety nets that encourage an economic transition. According to OECD data, in 2015 Sweden spent nearly 2 percent of GDP on public services aimed at retraining or facilitating job transitions, versus US spending of only 0.3 percent of GDP. Sweden’s policy sets the right tone and environment for a labor market evolution. As quoted in the New York Times, the Swedish minister for employment and integration said: “In Sweden, if you ask a union leader, ‘Are you afraid of new technology?’ they will answer, ‘No, I’m afraid of old technology.’ The jobs disappear, and then we train people for new jobs. We won’t protect jobs. But we will protect workers.” Beyond traditional welfare programs, innovative ideas like universal basic income and robot taxation should be part of the dialogue.

- Encourage responsible data consumption while supporting innovation. Despite its potential for disruption, AI-driven innovation can become a significant economic driver if properly managed. Policymakers must carefully weigh the consequences of regulation for the future of AI applications. The GDPR (General Data Protection Regulation), coming into effect in Europe on May 25, 2018, will have deep impacts on the expansion of AI as it pertains to data. In short, the law harmonizes data protection laws across EU members, but more importantly it also introduces new requirements: it protects individuals against misuse of their personal data and gives them control over how this data is used. All entities serving EU citizens will be impacted, which de facto makes this a global issue. How does this impact AI? The use of big data, which is the virtual fuel behind the power of AI, could be severely limited. The law has many complexities, including interpretation, measurability, and compliance monitoring, but the months and years to come will leave a deep imprint on the future of the digital economy, and AI is no exception.

- Serve as a catalyst for collective debate about the role of AI. Governments are uniquely positioned to facilitate the discussion on AI. Like other cutting-edge technologies such as genomics, AI challenges established social paradigms. This warrants a broad discussion and understanding of its ethical uses and applications across all facets of society: the academic, public, and private sectors, as well as civil society, must collectively establish the roles, boundaries, and limitations of the technology. To do so, governments must dedicate resources to shepherd these efforts. The United Arab Emirates has paved the way in this respect: in 2017 it appointed its first Minister of State for Artificial Intelligence. The UAE subsequently hosted the inaugural Global Governance of AI Roundtable, which assembled experts from over 20 countries to discuss the implications of AI for government, business, and society. The AI Initiative of The Future Society at Harvard Kennedy School was invited to contribute to this conversation. Hopefully this is the first of many examples to come.

The time is ripe for constructive discussions on how to adapt to the realities of an AI future. Governments are uniquely positioned to moderate these discussions and chart a balanced path toward efficiency and inclusive prosperity.

The views expressed in the Government Innovators Network blog are those of the individual author(s) and do not necessarily reflect those of the Ash Center for Democratic Governance and Innovation, the John F. Kennedy School of Government, or of Harvard University.
