The European Union (EU) recently announced rules to regulate the development of artificial intelligence (AI) tools and use cases. The UK is now following suit by setting out principles to do the same.
According to a report by news agency Reuters, these new principles set out by the Competition and Markets Authority (CMA) are designed to prevent generative AI models like ChatGPT from being dominated by a handful of major tech companies.
The upcoming rules will also hold major tech firms such as Google, Microsoft and OpenAI accountable for the potential negative consequences of AI without stifling innovation.
‘Cannot take positive future for granted’
CMA chief executive Sarah Cardell said that AI has real potential to turbocharge productivity and make everyday tasks easier, but that a positive future could not be taken for granted.
She said there was a risk that the use of AI could be dominated by a few players who exert market power.
“That’s why we have today proposed these new principles and launched a broad programme of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers,” she was quoted as saying.
Britain’s global AI safety summit
Britain is set to host a global AI safety summit in six weeks, where it will seek views from leading AI developers such as Google, Meta, OpenAI, Microsoft, Nvidia and Anthropic, as well as governments, academics and other regulators.
Meanwhile, the US is also looking to frame rules to regulate AI. The White House recently invited seven companies — Google, Microsoft, Meta, Amazon, OpenAI, Anthropic and Inflection — which announced voluntary commitments on the responsible development of AI technology.