The UK will ‘do its own thing’ on AI regulation – what does that mean?


Jaque Silva | NurPhoto | Getty Images

LONDON — Britain says it wants to do “its own thing” when it comes to regulating artificial intelligence, suggesting it may diverge from the approaches taken by other leading Western nations.

“It’s really important that the UK does its own thing in terms of regulation,” Feryal Clark, the UK’s minister for AI and digital government, said in a CNBC interview broadcast on Tuesday.

She added that the government already has “good relationships” with artificial intelligence companies such as OpenAI and Google DeepMind, which voluntarily open their models to the government for safety testing.

Clark added: “It’s important that we consider safety from the outset of model development… which is why we will work with the sector on any safety measures.”


Her comments echoed remarks made by British Prime Minister Keir Starmer on Monday that Britain “now has the freedom to regulate in a way that we think is best for the UK” after Brexit.

Starmer said: “There are different models around the world, there is the EU approach and the US approach, but we have the ability to choose the model that we think is in our best interests and we intend to do that.” He was answering reporters’ questions after announcing a 50-point plan to make the UK a global leader in artificial intelligence.

Differences with the United States and the European Union

However, the UK has so far not confirmed the details of its proposed AI safety legislation, saying instead that it will consult with industry before proposing formal rules.

“We will work with the industry to develop this project and move forward with it in line with what we said in our manifesto,” Clark told CNBC.

Chris Mooney, partner and commercial director at London-based law firm Marriott Harrison, told CNBC that while the EU is moving forward with its AI Act, the UK has taken a “wait-and-see” approach to AI regulation.

Mooney told CNBC via email: “While the UK government has stated it has a ‘pro-innovation’ approach to AI regulation, our experience working with clients is that they find the current position uncertain and therefore unsatisfactory.”

One area where Starmer’s government has made a public commitment to reforming AI rules is copyright.

At the end of last year, the UK launched a consultation to review the national copyright framework and evaluate possible exceptions to existing rules that would allow AI developers to use the work of artists and media publishers to train their models.

Businesses face uncertainty

Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC that while the government’s AI action plan “shows ambition,” its implementation without clear rules “borders on recklessness.”

“We’ve missed critical regulatory windows twice — first with cloud computing and second with social media,” Duggal said. “We can’t make the same mistake with artificial intelligence because the risks of artificial intelligence are exponentially greater.”

“Britain’s data is our crown jewel; it should be used to build sovereign AI capabilities and create a British success story, not just to power overseas algorithms that we cannot effectively regulate or control,” he added.

Details of Labour’s planned AI legislation were originally expected to appear in King Charles III’s speech at the opening of the British Parliament last year.

However, the government has only pledged to establish “appropriate legislation” for the most powerful AI models.

“The UK government needs clarification here,” John Buyers, international head of artificial intelligence at law firm Osborne Clarke, told CNBC, adding that he had heard from sources that a consultation on a formal AI safety law is pending release.

“By releasing consultations and plans piecemeal, the UK misses the opportunity to get a comprehensive picture of the direction of its AI economy,” he said, adding that failure to disclose details of the new AI safety law would lead to uncertainty for investors.

Still, some in the UK tech community believe a looser, more flexible approach to AI regulation may be right.

Russ Shaw, founder of advocacy group Tech London Advocates, told CNBC: “It’s clear from recent discussions with the government that there is a lot of effort being put into AI safeguards.”

He added that the UK was well positioned to pursue a “third way” in AI safety and regulation – “sector-specific” regulation for sectors as diverse as financial services and healthcare.



