Level of concern relating to unregulated AI advances

It has recently been reported that Elon Musk has warned of the out-of-control development of artificial intelligence (AI), which he says could “pose profound risks to society and humanity.” It looks like I will have to dust off the analogue ultrasonic testing (UT) set; no hardship there.

Mr Musk and thousands of other academics and tech industry figures have signed an open letter demanding that “all AI labs… immediately pause” work on advancing AI and calling for governments to temporarily ban further research if they do not.

The letter came about because Mr Musk and other signatories, including Apple co-founder Steve Wozniak and the head of the organisation behind the Doomsday Clock, have become alarmed by the recent rapid advances in AI.

The letter warns that “out of control” development by “unelected tech leaders” could lead to “non-human minds that might eventually outnumber, outsmart, obsolete and replace us” and that further advances “risk loss of control of our civilisation” unless proper checks and balances are put in place.

Concern has grown following the recent public success of ChatGPT. Researchers at Microsoft, which has invested in the technology, recently said that the latest version of the software, GPT-4, was showing signs of approaching human-level intelligence.

Specifically, the open letter urged “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4” and called for governments to ban further AI research along those lines if the labs fail to act.

AI regulation has not kept pace with the recent explosive growth in the technology. Mr Musk and his peers want industry standards to be developed to reduce the risk of AI running out of control.

Other signatories to the online open letter include researchers from OpenAI’s rival Google DeepMind and the President of the Bulletin of the Atomic Scientists, the organisation behind the Doomsday Clock. The clock is meant to signify how close we are to destroying our world with dangerous technologies of our own making.

The above demonstrates the level of concern relating to unregulated AI advances; in our industry sector, however, the advances remain advantageous. Both NDT and condition monitoring (CM) continue to progress, and key to this is often the ever-increasing data processing and storage capability available. Will this lead to a reduction in the number of people employed in our industry, or will more testing and monitoring take place as it becomes cheaper and more accurate? Sadly, we humans do have our limitations, especially with repetitive tasks.

NDT 4.0 is another IT advancement that is benefiting, and will continue to benefit, industry and mankind, and it could lead to drones, robots and other vehicles performing NDT and CM tasks without human intervention. I had drafted this article prior to receiving and reading Bernard McGrath’s article, ‘Drowning in data’, in the April 2023 issue of NDT News.

A media release dated 29 March 2023, titled ‘“Pro-innovation approach to AI regulation” white paper welcomed by professional body for IT’, described the UK government’s launch of its ‘AI regulation: a pro-innovation approach’ white paper, which is intended to guide the use of artificial intelligence in the UK by striking a balance between regulation and innovation.

The proposals aim to create the right environment for AI to flourish while building public trust.

The Department for Science, Innovation and Technology (DSIT) said that five principles should guide the use of AI: safety, transparency, fairness, accountability and contestability.

The plan would use existing regulators from various sectors rather than giving responsibility for AI governance to a new, single regulator. It is hoped that this approach will result in consistency across the regulatory landscape, with rules that can adapt quickly as the technology continues to develop at speed. The proposals focus regulation on the use of AI, rather than on the technology itself.

The UK’s AI industry is a burgeoning sector, employing over 50,000 people and contributing £3.7 billion to the economy last year. Chatbots such as ChatGPT are already mainstream.

Rashik Parmar MBE, Chief Executive of BCS, The Chartered Institute for IT, said: “AI is transforming how we learn, work, manage our health, discover our next binge-watch and even find love. The government’s commitment to helping UK companies become global leaders in AI, while developing within responsible principles, strikes the right regulatory balance.

“As we watch AI growing up, we welcome the fact that our regulation will be cross-sectoral and more flexible than that proposed in the EU, while seeking to lead on aligning approaches between international partners. It is right that the risk of use, not the technology itself, is regulated. It is also positive that the paper proposes a central function to help monitor developments and identify risks. Similarly, the proposed multi-regulator sandbox (a safe testing environment) will help break down barriers and remove obstacles.

“We need to remember that this future will be delivered by AI professionals, people who believe in shared ethical values. Managing the risk of AI and building public trust is most effective when the people creating it work in an accountable and professional culture, rooted in world-leading standards and qualifications.”

BCS will be working with its membership community to respond to the government’s consultation on the plans, which has a deadline of 21 June 2023.
