
Professor Adrian Hopgood, Professor of Intelligent Systems, explains that complementary artificial intelligence (AI) technologies can help to tackle the challenges of transparency and explainability.

The opacity of AI is sometimes cited as a shortcoming that results in a lack of trust and a difficulty in scrutinising the outputs. If that opacity is genuinely an AI problem, then AI can also provide a solution. In other words, AI technologies that are inherently transparent can be used to illuminate the opaque ones.

The definition of AI, according to UK Research and Innovation (UKRI), is “… a suite of computational technologies and tools that aim to reproduce or surpass abilities of humans to undertake complex tasks…” This is a broad definition that, significantly, mentions multiple technologies, complex tasks, and human capabilities. Yet much of the focus is currently on a single tool, i.e., machine learning, performing tasks that are narrow rather than complex, with no reference to human performance.

Machine learning is a process that involves finding patterns in datasets. It is a powerful tool that has driven much of the current interest in AI, but its mechanisms are opaque. Typically, machine learning is used for classification tasks. The algorithm draws its conclusion based on the best match from previously seen examples. It has no concepts surrounding the decisions it takes; it simply applies a label to a data item.
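The idea of classification as "best match from previously seen examples" can be sketched as a one-nearest-neighbour labeller. This is a minimal illustration, not any specific system discussed here; the feature vectors and labels are made up.

```python
def nearest_neighbour_label(seen, query):
    """Return the label of the closest previously seen example.

    `seen` is a list of (features, label) pairs; `query` is a feature
    vector. The algorithm has no concept of what a label means --
    it simply attaches the label of the best-matching data item.
    """
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best_features, best_label = min(seen, key=lambda pair: distance(pair[0], query))
    return best_label


# Hypothetical training data: two-dimensional features, arbitrary labels.
examples = [((0.0, 0.0), "A"), ((1.0, 1.0), "B"), ((0.9, 0.8), "B")]
print(nearest_neighbour_label(examples, (0.1, 0.2)))  # closest to (0, 0) -> A
```

The point of the sketch is what is missing: nothing in the code represents what "A" or "B" actually denote, which is precisely the conceptual gap described above.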

Before the current explosion of interest in machine learning, other forms of AI had already reached maturity. Many of these techniques were knowledge-based AI, in which a computer system is designed to capture human expertise. The concepts being reasoned about are explicit and textual. Crucially, these techniques are intrinsically transparent, as a chain of logic can be viewed that leads from a set of input information to the conclusions drawn.
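The transparency of knowledge-based AI can be illustrated with a tiny forward-chaining rule engine that records every inference it makes, so the chain of logic from inputs to conclusion can be inspected afterwards. The rules and facts below are illustrative only, not taken from any real clinical system.

```python
def forward_chain(facts, rules):
    """Apply if-then rules until no new facts emerge.

    `rules` is a list of (conditions, conclusion) pairs, where
    `conditions` is a set of facts. Returns the final fact set and a
    human-readable trace: the chain of logic behind each conclusion.
    """
    facts = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{' and '.join(sorted(conditions))} -> {conclusion}")
                changed = True
    return facts, trace


# Illustrative rules capturing a fragment of expertise.
rules = [
    ({"pain on movement", "recent fall"}, "suspected fracture"),
    ({"suspected fracture"}, "recommend X-ray"),
]
facts, trace = forward_chain({"pain on movement", "recent fall"}, rules)
for step in trace:
    print(step)
```

Unlike an opaque classifier, the trace itself is the explanation: every conclusion is accompanied by the explicit facts and rule that produced it.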

Knowledge-based AI and machine learning are not exclusive choices, as each can be used to complement the other. As an example, consider the interpretation of medical X-rays. A machine learning system can learn from thousands of examples to locate fractures. Yet the system has no concept of ‘fracture’, ‘patient’ or ‘image’. The algorithm is just labelling data, based on prior examples. A human clinician, on the other hand, will use other clues beyond the pixels of the X-ray image, such as the patient’s age and strength, the angles between specific joints, and the bone mobility. All this expertise can be captured within a set of knowledge-based software agents that hold explicit concepts about the patient and their anatomy, as well as the images and equipment. Such agents can sit alongside a machine-learning algorithm to verify that its outputs are consistent with medical knowledge. Further, they can provide a logical argument for why the conclusion makes sense. Conversely, they can flag any questionable decisions from an algorithm.
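The hybrid arrangement described above might be sketched as a rule-based agent that checks a machine-learning label against explicit knowledge and returns both a verdict and the reasons for it. Everything here is hypothetical: the rule, the patient fields, and the label are illustrative stand-ins, not a real clinical knowledge base.

```python
def check_ml_output(ml_label, patient):
    """Verify an ML label against simple knowledge-based rules.

    Returns (accepted, reasons): a verdict plus a human-readable
    argument, which an opaque classifier alone cannot supply.
    The single rule below is purely illustrative.
    """
    reasons = []
    accepted = True
    if ml_label == "fracture":
        if patient["age"] < 16 and patient["site"] == "growth plate":
            # Illustrative rule: growth-plate lines in children can be
            # mistaken for fractures, so flag the decision for review.
            reasons.append("possible growth-plate line in a child; flag for review")
            accepted = False
        else:
            reasons.append("fracture label consistent with patient age and site")
    return accepted, reasons


# A questionable algorithmic decision gets flagged, with an explanation.
ok, why = check_ml_output("fracture", {"age": 10, "site": "growth plate"})
print(ok, why)
```

The design choice is that the agent never replaces the classifier; it sits alongside it, accepting consistent outputs and flagging questionable ones with a stated reason.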

In conclusion, we need to get away from using a single-technology version of AI and complaining about its opacity. Instead, we can use the full AI toolkit, comprising concept-rich knowledge-based systems alongside data-driven algorithms. That way, we can progress towards the original definition of AI while addressing the issues of transparency and explainability.

This blog is taken from written evidence accepted and published in November 2022 by the UK Parliament Commons Select Committee on Science, Innovation and Technology concerning the governance of artificial intelligence.
