From unknown complexity to managed risk

Governments everywhere appear keen to realise the benefits that the widespread application of artificial intelligence technologies could deliver to society. Most governments also recognise the need to minimise the risks that arise (some of which are frequently claimed to be "existential") and are therefore seeking to put new regulatory regimes in place before anything bad actually happens to us.

This gives rise to the classic regulatory dilemma: how to protect the public without inhibiting further growth and innovation?

The current UK government has come out firmly in favour of regulation whose role, across the board, is to promote growth and innovation. To that end, its draft plan for regulating AI emphasises its pro-innovation credentials. However, the government's proposal also recognises that public trust and confidence in AI are essential for the successful uptake of these new technologies.

The UK's new regulatory regime for AI will therefore be based on five key principles, namely:

  • Safety, security and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance; and
  • Contestability and redress.

Details

"The UK has therefore laid out plans for building this essential trust via a proposed regulatory regime based on five core principles:

Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed

Transparency and "explainability": organisations developing and deploying AI should be able to communicate when and how it is used and explain a system's decision-making process in an appropriate level of detail that matches the risks posed by the use of AI

Fairness: AI should be used in a way which complies with the UK's existing laws, for example on equalities or data protection, and must not discriminate against individuals or create unfair commercial outcomes

Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes

Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI"

Although the set of proposed AI principles does refer to the need for accountability, there is little detail in the government's plans to date on how this is to be achieved, other than requiring the existing regulators (Ofcom, Ofgem, Ofwat and the like) to extend their remits to cover the new technologies.

This is despite concerns that these bodies may already be overstretched, under-resourced, under-skilled, and altogether unable to cope with these new responsibilities (see "UK rules out new AI regulator", BBC News).

There is a further problem

Just like a high-rise building, any AI product is a complex assembly of systems, made up of many different components from a variety of suppliers. At present, there is widespread disagreement and confusion about how to regulate such complexity, and even about which risks to regulate for.

It's an issue that is recognised across the AI sector. As the Ada Lovelace Institute puts it in its response to the government's draft proposals:

"Foundation models (like GPT-4) are often the building blocks behind specific technological products that are available to the public (like Bing), and themselves sit upstream of complex value chains. This means that regulators may struggle to identify whether a harm from a product is best remedied by the deployer of the tool, or if responsibility should live with the upstream foundation model developer."

- Regulating AI in the UK

It may therefore be reasonable to speculate that, in the event of an AI "emergency", this combination of unknown risks and uncertain lines of responsibility (the absence of a single source of truth) could prove disastrous.

Just as it was at Grenfell.

So, here's an idea: why don't we consider a regulatory approach to AI safety which is based on the same clear principle that is already widely used in the UK to turn unknown complexity into manageable risk, namely:

💡 Responsibility for risk is owned and managed by those who created it.

In this scenario, the information about an AI product that allows someone to understand it and keep it safe will always be managed so that it is accurate, easily understood, up to date, and accessible to those who need it.

In other words, a Golden Thread for AI.

According to this ideal, anyone with an interest in the safety and trustworthiness of that AI will have ready access to all the relevant information, as and when needed: a citizen concerned about apparent bias in the service provided; a software engineer who needs to know the safe way to update some code; an emergency response team that needs to know the best way to contain an AI gone rogue.
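To make the idea concrete, here is a purely illustrative sketch (in Python) of what a minimal Golden Thread record for an AI product might look like. Every name and field below is my own assumption for the sake of the example; it is not an official schema or a government proposal.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ComponentRecord:
    """One link in the value chain: what was supplied, by whom, and who answers for it."""
    name: str                 # e.g. an upstream foundation model or a fine-tuned layer
    supplier: str             # organisation that built or supplied the component
    version: str              # the exact version currently deployed
    responsible_contact: str  # named owner for safety queries about this component

@dataclass
class GoldenThreadRecord:
    """A single, up-to-date source of truth for one deployed AI product."""
    product_name: str
    deployer: str                     # organisation operating the product
    last_reviewed: date               # when the record was last confirmed accurate
    components: list[ComponentRecord] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)  # plain-language risk register
    redress_route: str = ""           # how an affected person can contest an outcome

# A concerned citizen, a software engineer and an emergency responder
# would all consult the same record.
record = GoldenThreadRecord(
    product_name="Example chatbot",
    deployer="Example Ltd",
    last_reviewed=date(2024, 1, 1),
    components=[ComponentRecord(
        name="Upstream foundation model",
        supplier="Hypothetical model supplier",
        version="1.2",
        responsible_contact="safety@supplier.example",
    )],
    known_risks=["Possible bias against certain groups of users"],
    redress_route="complaints@deployer.example",
)
print(record.components[0].responsible_contact)  # who to ask about the upstream model
```

The point is not the particular fields, but that responsibility for each component is recorded by those who created it, and that the record travels with the product.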

Could this approach help to minimise the likelihood of unintended negative consequences, and so reduce the potential for this technology to cause actual harm to real people?

Learning from experience

A readily accessible single source of truth would go a long way towards meeting the government's fundamental requirement that the UK's regulatory regime for AI helps build and maintain public trust in the safety and integrity of these tools, without deterring future growth and innovation in the technology itself.

The UK government recently hosted the first global summit on AI safety. Twenty-eight countries signed the Bletchley Declaration. Although a sceptic might well dismiss the Declaration as little more than "a firm commitment to keep talking", the consensus view across a range of opinions seems broadly favourable.

Of course, it remains to be seen what, if any, impact the summit's outcomes will have on the UK's approach to AI safety.

But do we need to go through this whole discussion process when the UK already has such an effective and well-understood principle for health and safety firmly in place? Building on prior experience, the government could act now to extend the same principle that keeps us safe in the physical world so that it also protects us in the digital realm.

In fact, based on policy proposals put forward by Lorna Woods and Will Perrin, the UK's Online Safety Bill now imposes a duty of care on social media platforms in respect of their users. This approach is not without its critics, particularly those concerned about its potential negative impact on freedom of speech, but it does provide a useful digital precedent and point of comparison when considering the regulation of AI.

I'm not suggesting that such an approach would resolve all the concerns around AI safety. There will always be good and bad actors. But a regulatory regime based on this simple, proven principle of risk and responsibility, managed via the paradigm of the Golden Thread, might just make it easier to distinguish one from the other.

Warning: naive outlook ahead

There was palpable public anger at the lack of accountability for basic housing safety after Grenfell. Perhaps a similar lack of a culture of responsibility around AI could one day result in something bad happening. That, in turn, might make people very angry indeed.

Of course, I know that it's most unlikely that this approach will be adopted (at least before something bad actually does happen to us).

It seems idealistic; far too simple and naive. There are too many vested interests and competing points of view.

Just as there were at Grenfell.

It's easy to get confused. The cliché still applies: we find it hard to learn from experience.

In fact, sometimes things are not quite as I remember them.

AI & The Golden Thread Part 3: memory playing tricks