There is a great deal of excitement around artificial intelligence (AI) at the start of the 2020s. Gone are the days when the subject was just the preserve of computer scientists and psychologists. Suddenly everyone is interested, including artists, authors, businesspeople, economists, engineers, politicians, scientists, and social scientists. This blossoming in the popularity of AI is welcome, but we should be wary of believing everything we hear or read about it.

A common misconception is that AI is new. Yet Alan Turing first published the notion of thinking computers as long ago as 1950. In the same article, he proposed the Imitation Game, now widely referred to as the Turing Test, to evaluate whether a computer system displays intelligence. If, in a natural-language typed conversation, you cannot tell whether you are chatting with a computer or a person, then the computer has passed the test.

Six years later, the Dartmouth Conference on AI was held at Dartmouth College, NH. The proposal for the conference, issued in 1955, was the first published use of the phrase “artificial intelligence”. The conference itself was a loose gathering of experts who exchanged ideas during the summer of 1956. During the decades that followed, many practical and useful applications of AI were developed. They were mostly based on explicit, knowledge-based representations of human expertise. This family of AI techniques included so-called expert systems, which captured expert knowledge to act as advisory assistants in specialist domains such as medical diagnosis, spectrometry, mineral prospecting, and computer design. Their popularity peaked in the 1980s, but they remain an important technique today.

At the same time, a separate family of AI research focused on models inspired by the neurons and interconnections of the brain. These data-driven models of AI were of only academic interest until 1985, when a practical combination of an artificial neural network structure with an effective learning algorithm was published. (Specifically, the back-propagation of errors algorithm was shown to be capable of training a multilayer perceptron.) That breakthrough led to another wave of excitement around these structures, which could learn to classify data effectively, guided by experience from training examples.

So, what has happened to cause such excitement 35 years after that breakthrough and 70 years after Turing’s article? I can think of five main reasons. First, the two broad families of AI models have matured and improved through iterative development. Second, more recent developments, such as deep learning, have produced newer and more powerful types of neural network for machine learning. Third, the rise of the Internet has enabled any online system to access vast amounts of information and to distribute intelligent behaviours among networked devices. Fourth, huge quantities of data are now available to train AI systems. Finally, and perhaps most importantly, the power of computer hardware has improved to the extent that many computationally demanding AI concepts are now practical on affordable desktop computers and mobile devices.

Much of the current interest in AI is focused on machine learning. The principle is simple: show a large artificial neural network thousands of examples of images or other forms of data, and it will learn to associate those examples with their correct classification. Crucially, the network learns to generalize so that, when presented with an image or data pattern that it has not seen before, it can reliably classify it provided that similar examples existed in the training set. This is a powerful technique that enables a driverless car, for example, to recognize a Stop sign in the street. However, it is important to remember that the algorithm will not understand what that classification means. To go beyond a simple classification label requires knowledge-based AI. That’s the same AI that has its roots in the expert systems that started to evolve in the decades following the Dartmouth conference.
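To make the principle concrete, here is a minimal sketch in Python. The choice of library (scikit-learn) and its bundled handwritten-digits dataset are purely illustrative stand-ins for the kind of image classification described above, not a reference to any particular production system.

```python
# A minimal sketch of supervised classification with a small neural network,
# using scikit-learn's handwritten-digits dataset as a stand-in for "images".
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1,800 labelled 8x8 pixel images of digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

# Show the network many labelled examples...
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# ...and it generalizes to images it has never seen before.
print("Accuracy on unseen images:", model.score(X_test, y_test))

# The output is only a class label; the model has no notion of what a "7"
# (or a Stop sign) actually means beyond the patterns in its training data.
```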

So, any practical AI system today needs to use a mixture of techniques from the AI toolbox. AI systems can now perform amazing feats faster and more reliably than a human. Nevertheless, we should not get carried away. Any current AI is confined to a narrow task and has little or no conceptualization of what it does. So, despite some very real and exciting AI applications, we are still a long way from building anything that can convincingly mimic the broad range of human intelligence. Furthermore, our current models of AI are not leading in that direction. A new and unexpected model could take us by surprise at any time, but I don’t expect to see it in my lifetime.

Even within its current limitations, AI is starting to transform the workplace. It is already assisting professionals in pushing the boundaries of their specialisms, with examples ranging from cancer care to improved business and economic management. AI also has the potential to remove dull and repetitive jobs. There is the tantalizing possibility of a new world order in which we work less and enjoy more leisure and education. Such a possibility creates its own challenges, though, as it would require us to restructure our societies accordingly.

The COVID-19 pandemic of 2020–21 has shown that societies are capable of sweeping changes, as livelihoods have changed and homeworking has become the norm for many. AI has shown its value by supporting scientists in COVID-19 applications that include repurposing existing drugs, diagnostic interpretation of lung images, recognizing asymptomatic infected people, identifying vaccine side-effects, and predicting infection outbreaks.

While the human-like qualities in AI are still rather shallow, now is the right time to grapple with the bigger ethical questions that will arise as its capabilities grow. Even now, there are questions over the degree of autonomy and responsibility to grant to an AI. Further, if it goes wrong, who will be responsible? If we ever create an AI that is truly human-like, will it have its own rights and responsibilities? While we are a long way from that scenario, now is the time to debate, discuss, educate, and legislate for a future world in which AI is a dominant player.

Adrian Hopgood is a Professor of Intelligent Systems and Director of Future & Emerging Technologies

This piece was originally published in Towards an International Political Economy of Artificial Intelligence