
UK Government is looking to regulate AI, but it won’t be easy

As was bound to happen eventually, the buzz around newly mainstream AI software has governments wondering how they can regulate it. The likes of ChatGPT raise plenty of ethical questions, and as yet there are no rules on how to proceed.

The UK Government has said that AI contributed £3.7 billion to the UK economy last year, but critics fear it could threaten jobs or be used for malicious purposes.

The BBC reported: “As AI continues developing rapidly, questions have been raised about the future risks it could pose to people’s privacy, their human rights or their safety. AI could also be used to create and spread misinformation. As a result many experts say AI needs regulation.”

A new white paper from the Department for Science, Innovation and Technology outlines a proposed set of rules for general-purpose AI. Rather than creating a single AI regulator, it leaves oversight to existing regulators (such as the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority), each applying the rules to the issues within their own remit.

But Simon Elliott, a partner at law firm Dentons, has said that regulating AI won’t be easy, warning that consumer groups and privacy activists are likely to raise concerns over the risks to society “without detailed, unified regulation.”

He is also worried that the UK’s regulators could be burdened with “an increasingly large and diverse” range of complaints, when “rapidly developing and challenging” AI is added to their workloads.
