The promise and threat of artificial intelligence

I finished last month’s column by referring to what many people think will revolutionise everyday life in the next few decades: artificial intelligence (AI). Indeed, some futurists claim that AI will match human intelligence by 2045. AI has been around for a number of years: in 1997, Garry Kasparov was beaten at chess by IBM’s Deep Blue and, since then, increases in computing power have enabled large quantities of data to be analysed, patterns to be extracted and solutions to be derived.

Recently, AI has captured people’s interest. Two film releases, Ex Machina and Her, place the human/machine interface at the centre of their plots. Driverless cars have been in the headlines. At the beginning of March, AlphaGo, a computer program, beat one of the world’s top ‘Go’ players at the board game. This was seen as a big leap forward for AI. Back in 1997, the chess-winning algorithms would search through vast numbers of possible continuations, looking many moves ahead, and then choose the move with the highest probability of winning. Go has many more possible moves than chess and, even with today’s processing power, this brute-force approach is not feasible. Instead, neural networks allow the computer to learn by analysing the matches of the best human Go players and then to refine that learning further by playing against itself. AlphaGo was designed by a British company, DeepMind, which was bought by Google as part of its investment in AI.
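
To make the contrast concrete, here is a toy sketch in Python that plays the simple counter game Nim both ways: an exhaustive game-tree search in the Deep Blue mould and a crude self-play learner in the AlphaGo mould. It is purely illustrative – the game, the parameters and the function names are all my own inventions and bear no relation to IBM’s or DeepMind’s actual systems:

    import random
    from functools import lru_cache

    MOVES = (1, 2, 3)  # each turn, take 1-3 counters; whoever takes the last one wins

    @lru_cache(maxsize=None)
    def negamax(pile):
        """Deep Blue-style evaluation: exhaustively search every continuation.
        Returns +1 if the player to move can force a win, -1 if they cannot."""
        if pile == 0:
            return -1  # no counters left: the player to move has already lost
        return max(-negamax(pile - m) for m in MOVES if m <= pile)

    def search_move(pile):
        return max((m for m in MOVES if m <= pile), key=lambda m: -negamax(pile - m))

    def learn_by_self_play(start=21, games=20000, alpha=0.1, eps=0.1):
        """AlphaGo-flavoured alternative (vastly simplified): no exhaustive
        search, just a value estimate per position, refined by self-play."""
        value = {n: 0.0 for n in range(start + 1)}
        for _ in range(games):
            pile, visited = start, []
            while pile > 0:
                moves = [m for m in MOVES if m <= pile]
                m = (random.choice(moves) if random.random() < eps
                     else max(moves, key=lambda m: -value[pile - m]))
                visited.append(pile)
                pile -= m
            target = 1.0  # whoever moved last took the final counter and won
            for p in reversed(visited):
                value[p] += alpha * (target - value[p])
                target = -target
        return value

    value = learn_by_self_play()
    print(search_move(21))                           # exhaustive search: take 1
    print(max(MOVES, key=lambda m: -value[21 - m]))  # the learner usually agrees

The search version is guaranteed to be perfect, but only because Nim is tiny; the self-play version scales to games such as Go precisely because it never tries to look at everything – which, as the next episode shows, is also how it can be caught out.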

The success of AlphaGo was heralded as a new beginning for AI: if a computer can teach itself to be superior to humans at Go, then what is to stop it from being superior at everything else? It didn’t take long for a reality check to gatecrash the party. Later in March, Microsoft launched a tweeting ‘chatbot’ called Tay, built to talk like a teenage girl and designed to refine its conversation by learning from its interactions with the public. Unfortunately, it learnt from the wrong people and had to be taken offline after it began producing racist and sexist tweets.

This highlights a potential Achilles heel of the neural network approach. As with humans, such a system is only as good as the information it learns from, and it can be caught out by the unexpected. In the case of Tay, the developers had made a presumption about how the public would respond and hence about the content from which Tay would learn. With hindsight, many observers pointed out that the actual response was not unexpected, with the Daily Telegraph quoting a professor from Bristol University as saying: “Have you seen what many teenagers teach to parrots?” Google DeepMind’s AlphaGo learnt from the best human players and from playing itself, and observers credited it with generating creative moves in particular situations. Yet when the human player, Lee Sedol, won the fourth game, he did so by playing what is referred to in Go terms as a ‘divine move’ – a truly inspired, non-obvious, original move. AlphaGo, too, was defeated by the unexpected.
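
The point can be made with a deliberately crude sketch – nothing remotely like Microsoft’s actual system, and with entirely invented data and function names – showing how directly a naive learner reflects whatever it is fed:

    from collections import Counter

    def learn(phrases):
        """'Learn to talk' by simply counting which words the crowd uses."""
        return Counter(word for phrase in phrases for word in phrase.split())

    def speak(counts, length=5):
        """Reply using the most common words learnt so far."""
        return " ".join(word for word, _ in counts.most_common(length))

    polite_crowd = ["have a lovely day", "thanks for the lovely chat",
                    "lovely weather today"]
    hostile_crowd = ["you are awful", "everything is awful", "awful awful bot"]

    print(speak(learn(polite_crowd)))   # lovely have a day thanks
    print(speak(learn(hostile_crowd)))  # awful you are everything is

The learner has no notion of which words it ought to pick up; it simply mirrors its teachers, which is exactly what happened, at vastly greater sophistication, to Tay.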

Opinion on AI is split into two camps: the optimists, who see all the potential applications, with AI removing the need for humans, and the pessimists, who warn that AI could take over and be the downfall of the human race. For the time being at least, AI still needs a helping hand from a human. At the end of April this year, Tim Peake, the UK’s astronaut on the International Space Station, remotely manoeuvred a robot rover across a simulated Mars terrain and cave located in Stevenage. The rover can plan its own route over a surface and steer itself; however, it struggles with the shadows created by craters and caves, so it is more efficient to have human input.

It does not take a great leap of imagination to envisage AI encountering similar difficulties in the analysis of NDT data when the unexpected suddenly appears. Equally, it is possible to see the benefits that AI could bring in helping the operator to sift through large amounts of data and in providing sufficient information on which to make a final decision, as the sketch below illustrates. Back when Kasparov was beaten, it was possible to examine the process through which the computer arrived at its decisions. Today’s AI systems undertake their own learning and it is much more difficult to follow the process that generates the solutions they spit out. For critical safety judgements, it may not be wise to rely on a decision when the basis for that decision is not known. In his book Team of Teams: New Rules of Engagement for a Complex World, General Stanley McChrystal writes: “As complexity envelops more and more of our world, even the most mundane endeavours are now subject to unpredictability…”. AI is coming to NDT, but the human operator will be sticking around for a long time to come.
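
A minimal sketch of that human-in-the-loop arrangement might look as follows – the data, the thresholds and the function names are all invented for illustration, and a real NDT system would of course be far more involved:

    def triage(indications, auto_clear=0.05, auto_flag=0.95):
        """Let the model sift the bulk of the data, but route anything it is
        unsure about to the human operator for the final decision."""
        cleared, flagged, for_operator = [], [], []
        for ind in indications:
            p = ind["p_defect"]  # model's estimated probability of a defect
            if p < auto_clear:
                cleared.append(ind)       # confidently benign
            elif p > auto_flag:
                flagged.append(ind)       # confidently a defect
            else:
                for_operator.append(ind)  # ambiguous: a human decides
        return cleared, flagged, for_operator

    scan = [{"id": 1, "p_defect": 0.01}, {"id": 2, "p_defect": 0.62},
            {"id": 3, "p_defect": 0.99}]
    cleared, flagged, for_operator = triage(scan)
    print([ind["id"] for ind in for_operator])  # [2] - the operator decides

The machine does the sifting; the person keeps the judgement. Crucially, the thresholds make explicit where the model’s opinion stops being trusted, which goes some way towards knowing the basis for a decision in safety-critical work.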


Please note that the views expressed in this column are the author’s own personal ramblings for the purpose of encouraging discussion within the NDT Newspaper. They do not represent the views of Amec Foster Wheeler or the HSE who funded the PANI projects.

Letters can be mailed to The Editor, NDT News, Newton Building, St George’s Avenue, Northampton NN2 6JB. Fax: 01604 89 3861; Email: ndtnews@bindt.org or email Bernard McGrath direct at bernard.mcgrathfw@amec.com
