UK on AI: The UK government today published a white paper outlining a “pro-innovation approach” to regulating artificial intelligence (AI), which includes no dedicated watchdog for AI and no new legislation, but rather a “proportionate and pro-innovation regulatory framework”.
The development came as technology leaders, including Elon Musk and Steve Wozniak, signed an open letter urging leading artificial intelligence labs to pause their work for six months, saying that recent advances in AI present “profound risks to society and humanity.”
The UK white paper seemed to put the onus on responsible use of AI, essentially shifting any liability arising from abuse of the technology onto the user. And, oh, there was no mention of ChatGPT.
The government has also established a £2-million sandbox trial for testing artificial intelligence regulations. The approach contrasts with the European Union’s risk-based framework, which includes up-front prohibitions on certain AI uses.
UK on AI, and its implications
The artificial intelligence industry currently employs 50,000 people in the UK and contributed £3.7bn ($5.1bn) to the UK economy in 2021.
According to the white paper, the UK Department for Science, Innovation and Technology (DSIT) has introduced the five principles of transparency, robustness, explainability, fairness, and accountability for regulating the use of artificial intelligence in the country.
The UK government hopes to see the technology put into more widespread use, as it can bring potential benefits to many parts of society, such as aiding doctors in identifying diseases and helping farmers make more sustainable and efficient use of their land.
However, the government also recognizes the risks posed by artificial intelligence, particularly around privacy, bias and safety. The report has also warned that there needs to be a pathway for redress if someone is the victim of a harmful AI decision.
“To ensure we become an AI superpower, though, it is crucial that we do all we can to create the right environment to harness the benefits of AI and remain at the forefront of technological developments. That includes getting regulation right so that innovators can thrive and the risks posed by AI can be addressed,” Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, wrote in the report.
“These risks could include anything from physical harm, an undermining of national security, as well as risks to mental health. The development and deployment of artificial intelligence can also present ethical challenges which do not always have clear answers.”
The report does not mention any steps or plans on regulating tools like ChatGPT.
Halt AI, technology big shots urge
Meanwhile, hundreds of the biggest names in technology, including Elon Musk and Steve Wozniak, have signed an open letter issued by the Future of Life Institute, a US-based non-profit organization, supporting a pause on generative artificial intelligence development.
They argue that there is a need to better understand the potential risks posed by the emerging technology before continuing its advancement.
The signatories propose that artificial intelligence labs and independent experts should collaborate during this pause to create a set of shared safety protocols for advanced artificial intelligence design and development, which they believe are currently lacking.
The open letter was published less than a week after OpenAI launched GPT-4, the successor to the model that powers ChatGPT, and has since gained significant traction, with more than 1,100 signatures recorded by Wednesday.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects,” read the letter.
“Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in.”
The push for a pause follows rapid developments over the past six months that have fuelled an arms race between Big Tech companies such as Google and Microsoft, which are racing to incorporate advanced AI into everyday productivity tools.
Billionaire entrepreneur Elon Musk has long been a vocal critic of artificial intelligence, warning of its potential dangers and calling for increased investment in AI safety research.
Signatories to the open letter include Apple co-founder Steve Wozniak, Israeli historian and author Yuval Noah Harari, and Pinterest co-founder Evan Sharp, among others. A small number of staff from Google, Microsoft, Facebook, and DeepMind have also signed.