It seems like a fair question.
Governments everywhere are striving to appear keen to realise the benefits to society promised by the rapid, ongoing development of artificial intelligence and machine learning. In the UK there is high-level recognition that public faith in the trustworthiness and safety of these tools is essential if those benefits are to be realised.
To that end, the government (always keen to be seen as taking a "pro-innovation" approach) has proposed five key principles to govern the regulation of AI safety and alleviate concerns about the supposed "existential threat" posed by the emergent AI sector. The principles themselves don't appear to be controversial, but as we have already seen, there is a significant lack of detail about how these principles are to be implemented, given the complex, multi-part nature of the AI tools themselves.
Moreover, rather than establish a new single regulator for AI (or empower an existing one), the government intends to share responsibility for the safe regulation of AI across existing regulators (e.g. the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority). These bodies will be tasked with identifying approaches to regulation that support the way AI tools are actually being used in their specific sectors.
At present, the government also expects existing legislation to cover any scenario that might arise, rather than introducing new laws.
This approach has already been criticised as "piecemeal". According to some, there are "dangerous gaps" in these proposals:
"Initially, the proposals in the white paper will lack any statutory footing. This means no new legal obligations on regulators, developers or users of AI systems, with the prospect of only a minimal duty on regulators in future." - Michael Birtwistle, Ada Lovelace Institute [via TelecomTV]
The UK's current approach to AI regulation is also markedly different from that of other countries and regions. The European Union is implementing a prescriptive, top-down, rules-based approach; the USA appears to be assembling a more bottom-up patchwork quilt of executive-branch actions; and China has put the ideological concerns of the Party at the forefront of all considerations regarding AI.
However, not even the “big men” in this space are in agreement. Leading AI developers such as OpenAI's Sam Altman are insistent on the need for regulation, "imploring" the US Senate to do just that. Others, such as venture capitalist Marc Andreessen, decry the supposed risks, consigning such talk to the category of:
"moral panic."
- Marc Andreessen, Why AI Will Save the World, Andreessen Horowitz
At the same time, other voices maintain that any attempt at regulation is doomed to failure:
"The question of a body like the United Nations regulating AI is like suggesting the UN regulate [image editing app] Photoshop.....There are tens of thousands of individual developers who are building on these innovations. Regulation of them is never going to happen."
- Jimmy Wales, co-founder of Wikipedia, BBC News
Adding to this rather heady mix, there's a critique based on understanding the seemingly "godlike powers" of artificial intelligence as part and parcel of an ongoing false and self-serving argument:
"Let's call out this narrative for what it is: a sleight of hand designed to make us look away from the questionable decisions being taken by Big Tech."
- Mhairi Aitken, Ethics Fellow at The Alan Turing Institute; Letters to the Editor, Financial Times, April 21, 2023
Rachel Coldicutt, in an article for Medium, "On understanding power and technology", goes further. Her argument is worth quoting at length:
"The current "existential threat" framing is effective because it fits on a rolling news ticker, diverts attention from the harms being created right now by data-driven and automated technologies and it confers huge and unknowable potential power on those involved in creating those technologies. If these technologies are unworldly, godlike, and unknowable, then the people who created them must be more than gods; their quasi-divinity transporting them into state rooms and on to newspaper front pages without need to offer so much as a single piece of compelling evidence for their astonishing claims. "
From this point of view, understanding the "existential" threat of AI becomes a matter of media literacy:
"... No one will ask what the words really mean, because they don't want to look like they don't really understand ... And yet, really, it's a just a narrative trick: the hidden object is not a technology, but a bid for power. This is a plot twist familiar from Greek myths, cautionary tales and superhero stories, and it's extremely compelling for journalists because most technology news is boring as hell. ...
Moreover:
...Computer science is a complex discipline, and those who excel at it are rightly lauded, but so is understanding and critiquing power and holding it to account. Understanding technologies requires also understanding power; it needs media literacy as well as technical literacy; incisive questioning as well as shock and awe. If there is an existential threat posed by OpenAI and other technology companies, it is the threat of a few individuals shaping markets and societies for their own benefit. Elite corporate capture is the real existential risk, but it looks much less exciting in a headline."
All the while, the UK government continues to big up the potential threats:
Can Rishi Sunak's big summit save us from AI nightmare? - BBC News.
And it seems that the UK's somewhat laissez-faire approach to regulation may already be earning commercial reward and recognition: witness Google's investment in a new Centre for Human-Inspired AI in partnership with Cambridge University.
Is this a clear win for industrial policy in action? Or a shining example of "elite corporate capture" at work?
It may even be that success in this field cannot and will not go unpunished (even by our nearest and dearest). President Biden revealed his Executive Order on "Safe, Secure and Trustworthy Artificial Intelligence" on the eve of the UK AI Safety Summit, thereby somewhat upstaging Mr Sunak's big day.
We always hurt the ones we love, right?
So. Where are we?
It's easy to get lost in the maze. Myths and mystification abound. AI tools are complicated systems which the vast majority of us do not understand. There's a perceived lack of a culture of responsibility around this technology.
However, as Hackitt points out, the same was also true of Grenfell, and of the technologies used to create and manage other high-rise buildings across the UK.
Recommendations from her report, Building a Safer Future, are now formal legal requirements. The Golden Thread approach to responsible information management is mandated by the Building Safety Act 2022, whose main provisions came into force in October 2023.
Consider this
If the Golden Thread approach to turning complicated unknowns into manageable risks can be implemented to regulate one branch of engineering (the high-rise construction industry), perhaps it is also worth considering when it comes to regulating another (advanced software engineering, which, lest we forget, is all artificial intelligence really is).
It might well seem that it's completely nuts to make this analogy, to compare the health and safety of high-rise buildings to the risks posed by generative AI. But for me, it feels important to demystify what really is just software. Although they are complicated structures, it helps to keep in mind that both a residential tower block and an AI tool really are just products of different branches of engineering.
Along those lines, if a building can be thought of as just nuts and bolts held together by a few other things, a generative AI product is just bits and bytes held together with some advanced statistical logic. The language of AI even borrows some of its descriptive metaphors from the construction industry. Products such as ChatGPT are often described as "foundational", or as a "building block" from which other tools and services can be assembled. Amazon and Meta are positioning their respective approaches to AI as providing "platforms", something on which other products may be built.
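What might a Golden Thread look like in software terms? Purely as a thought experiment, here is a minimal sketch in Python of such a record: an append-only log of accountable decisions across an AI system's lifecycle. Everything in it (the GoldenThreadEntry and GoldenThread names, the fields, the example values) is hypothetical; the Building Safety Act prescribes no such data structure, and this is nobody's existing standard or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration only: a "golden thread" style audit record
# for an AI system, borrowing the Building Safety Act's principle of an
# append-only history of who changed what, when, and why. None of these
# names come from any existing standard, law, or library.

@dataclass(frozen=True)
class GoldenThreadEntry:
    author: str       # the accountable person making the change
    role: str         # e.g. "model developer", "safety reviewer"
    change: str       # what was altered: data, weights, guardrails...
    rationale: str    # why the change was made
    evidence: str     # pointer to test results, audits or sign-offs
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class GoldenThread:
    """An append-only record of decisions across a system's lifecycle."""

    def __init__(self, system_name: str) -> None:
        self.system_name = system_name
        self._entries: list[GoldenThreadEntry] = []

    def record(self, entry: GoldenThreadEntry) -> None:
        # Entries can only ever be added, never edited or removed, so the
        # history remains trustworthy when regulators come to inspect it.
        self._entries.append(entry)

    def history(self) -> tuple[GoldenThreadEntry, ...]:
        return tuple(self._entries)

# Example: recording a guardrail change before deployment.
thread = GoldenThread("example-chat-assistant")
thread.record(GoldenThreadEntry(
    author="J. Smith",
    role="safety reviewer",
    change="tightened the refusal policy for medical advice",
    rationale="internal incident report flagged unsafe completions",
    evidence="https://example.org/audits/incident-42",
))
```

The code itself is beside the point; the discipline it gestures at is not: every consequential change to the system leaves a named, dated, inspectable trace.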
An analogy too far?
Of course, there may be degrees of scale that will need to be accounted for. Much of the mystique around AI stems from the speed and scope at which these tools operate, along with their apparent ease of use. I'm not advocating complacency; if a poorly regulated system of public housing can kill people, then a poorly regulated AI system may well have the potential to do the same.
The analogy may still seem like crazy talk. But at least it might be a good place to start.
Now this is crazy talk
Macbeth, dir. Orson Welles, 1948
There are a lot of (fast-)moving parts to this discussion. Creative solutions are required. Perhaps there is an art to devising a new regulatory regime, just as much as there is science.
And what do we know about making art?
Of course:
"The art of making art is putting it together."