Similarities may be there if we look

At the time of writing, prospects for effective AI regulation may seem more remote than ever. Although the UK's AI Safety Summit was noted for the mutual exchange of warm fuzzies and earned the Prime Minister widespread praise (even from a usually critical source), behind the careful stage management things were reportedly a little different.

According to a BBC summary, the summit talks were largely shaped by divisions linked to various competing interests at national and commercial levels. The Financial Times reported that the summit "exposed underlying tensions" between the main players, largely stemming from different attitudes around the role of open-source models of AI and their possible impact on the development of (profitable) business models. 

Prior to the summit, Michael Birtwistle, associate director of AI law and regulation at the independent research body the Ada Lovelace Institute, said he believed the UK's proposed strategy could result in potentially dangerous gaps in the government's approach; in his words, it is:

"underpowered relative to the urgency and scale of the challenge."

It remains to be seen what further progress is being made to address that challenge.

Meanwhile, the EU is said to be struggling to land its "flagship" rules-based regulatory framework for AI. Although a successful compromise agreement may have been reached by the time this blog is published, it still seems worth noting that (according to Bloomberg):

"...extensive, late-into-the-night discussions underscore how contentious the debate over regulating AI has become, dividing world leaders and tech executives alike as generative tools continue explode in popularity. The EU — like other governments including the US and UK — has struggled to find a balance between the need to protect its own AI startups, such as France's Mistral AI and Germany's Aleph Alpha, against potential societal risks …
…. That has proven to be a key sticking point in negotiations, with some countries including France and Germany opposing rules that they said would unnecessarily handicap local companies. …
… EU policymakers had proposed a plan that would require developers of the type of AI models that underpin tools such as ChatGPT to maintain information on how their models are trained, summarize the copyrighted material used and label AI-generated content. Systems that pose "systemic risks" would have to work with the commission through an industry code of conduct. They would also have to monitor and report any incidents from the models." 

The regulatory dilemma persists: how to protect the public without inhibiting growth and innovation. We need a clearer idea of what we are regulating for.

Recent events have hardly helped. The debacle over governance at OpenAI exposes what the headline writer at Bloomberg described as "the Charade of AI Accountability".

The report goes on to claim this proves that:

"the cult of the founder is alive and well in Silicon Valley."

Subsequent reporting and discussion have exposed a dichotomy of divergent world views, perhaps similar to the conflicting priorities behind the scenes at the UK AI Safety Summit; and, as noted above, between some member countries (France, Germany) and the EU.

In headline terms this can be seen as a split between those who support a more open, ethically minded approach to AI development and those who support full-on, uninhibited progress on innovation: the “Do-Gooders” versus the “Go-Fasters”.

It now seems that the OpenAI board had become irrevocably split between the “effective altruists” (catchphrase: “Doing good, better”) and the adherents of a movement known as effective accelerationism (sometimes abbreviated to e/acc), which advocates unfettered technological progress.

According to Wikipedia,

“Central to effective accelerationism is the belief that propelling technological progress at all costs is the only ethically justifiable course of action. The movement carries utopian undertones and argues that humans need to develop and to build faster to ensure their survival and propagate consciousness throughout the universe.”

All of which gives internet commentator Cory Doctorow the opportunity for an(other) entertainingly polemical rant.

In his view, the dichotomy between the effective altruists and the e/acc movement is based on a shared mistaken belief in the imminent and supposedly emerging “godlike” capabilities of AI (and by extension the supposed prowess of the tech creators themselves).

It's a similar argument to the critique of that part of the narrative around AI which can be seen as both self-justifying and self-aggrandising. As is often the case with Cory's writings, for me his rhetoric feels somewhat reductive and lacking in nuance; but he does bring a lot of rich information, and some colour, to the debate.

Somewhere in this unholy mix, players such as Meta (with major AI aspirations of its own) and Amazon (always happy to spoil someone else’s party) are turning up the heat with their support for open-source models of AI, in competition with proprietary offerings from Microsoft, Google et al.

Meanwhile, evidence continues to mount showing that AI and the wider digital tech industry cannot regulate themselves:

As Carsten Jung of the Institute for Public Policy Research (speaking before the recent UK AI Safety Summit) puts it:

"Regulators and the public are largely in the dark about how AI is being deployed across the economy. But self-regulation didn't work for social media companies, it didn't work for the finance sector, and it won't work for AI. We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.”

The ongoing rapid pace of technology development puts further pressure on the need for transparency and accountability. Alarms are already sounding regarding the rumoured capabilities of OpenAI's Q* product, which is yet to launch:

"When AI systems start solving problems, the temptation to give them more responsibility is predictable. That warrants greater caution...
... Companies that outsource more to AI also risk baking gender and racial stereotypes into their work systems. Most firms must choose from a handful of so-called foundation models from OpenAI, Google or Amazon.com Inc. to upgrade their infrastructure, and such models have been called out for not only showing entrenched bias toward people with disabilities and racial minorities but also for being highly inscrutable. OpenAI and, by extension, Microsoft Corp. have refused to disclose details that independent researchers need to determine how biased their language models are." - Bloomberg

As ever, we need to stay alert to the data ethics risk of unintended negative consequences leading to possible actual harm to real people.

Some encouraging signs?

Tech journal The Register reported that the UK government used the recent King’s Speech to announce its plans for regulating self-driving cars:

"The proposed legislation … would put "safety and the protection of the user at the heart of our new regime and makes sure that only the driver – be it the vehicle or person – is accountable, clarifying and updating the law," the government promised."

In this scenario, whichever party was in control of the vehicle at the time it caused an accident, whether the human driver or the AI software, is clearly to be held responsible.

In other words: ensuring that responsibility for risk rests with whoever created that risk. 

Where have we heard that before? [Link to Part 1]

I'm no regulatory expert; but could this be an example of the principle of risk and responsibility which underpins the Health and Safety at Work Act 1974 being extended into the realm of AI governance and regulation?

It does seem to go against the government's stated intention of implementing its proposed new regulatory regime for AI without the need for any new legislation. It may be the very thin edge of a very fat wedge. Perhaps a small sign of inevitable things to come?

The recent out-of-court settlement in the USA, which led to the closure of the notorious Omegle chat site, is also interesting:

“Alice's case is a legal landmark, as most social media lawsuits in the US are dismissed under a catch-all protection law called Section 230, which exempts companies from being sued for things that users do on their platforms.
Alice's attorneys used a novel angle of attack called a Product Liability lawsuit, arguing that the site was defective in its design.
"This was the first case where the platform could be held liable for the harm from one user to another and that's largely because of our argument that the product design made the type of harm so foreseeable," says attorney Carrie Goldberg, who led the case with co-counsels Naomi Leeds and Barb Long.
Product Liability cases are a growing trend, with dozens of similar suits launched in the last year against platforms such as Instagram and Snapchat.” - BBC News

Good governance for AI means that companies should not be allowed to escape responsibility for the harm their technology may cause. More focus on product liability in IT may be a means to that end.

Good corporate governance may even be a point of marketing differentiation. For example, Anthropic [link], an AI company funded by both Amazon and Google, is said to be positioning itself “very much as the responsible AI pioneer”. According to Jared Kaplan, its chief science officer:

 "We think there should be a race to the top for safer AI and more ethical AI"

According to Bloomberg,

“Anthropic's pitch sounds similar to the one Apple uses for its smartphones: that it cares deeply about ethics and responsibility.”

But the ethical standard we are striving for must mean more than just making sure that tech creators think about the consequences for society of their work.

Putting it together

The core principle of health and safety regulation is that responsibility for risk rests with whoever creates the risk. It seems to me that this could be applied to AI just as effectively as it has been used to protect us in most other aspects of our daily lives.

But we need to go beyond just that. Rachel Caldicott calls for a renewed focus on the importance of digital media literacy. Progress on that is essential if we are to make well-informed decisions about how best to benefit from all the new things AI may bring.

With all these disparate parts, it would help if we could put it all together in ways that most people can understand, and in ways from which everyone will benefit. And that requires knowing more about who is responsible for what.

In other words, a Golden Thread for AI.

It may be some way off. It may require something bad to happen. We may need to put it together bit by bit.

But until then, who doesn't love a camp version of a Sondheim show tune when discussing draft regulatory regimes?


Bernadette Peters performing "Putting It Together" at the 66th Academy Awards.