I'd never heard of the Golden Thread and only came across it by accident while looking for something else online. Taking a break from reading about artificial intelligence, I was searching for details of something I could only half-remember from my undergraduate degree course on medieval English literature: the concept of:

The Great Chain of Being

This is the once widely held belief that there is:

"a hierarchical structure of all matter and life, thought by medieval Christianity to have been decreed by God. The chain begins with God and descends through angels, humans, animals and plants to minerals."

Wikipedia

The classic example of what happens when this Great Chain is broken is Shakespeare's Macbeth, in which the Thane of Cawdor breaks the natural hierarchy by murdering his King, thereby unleashing forces that drive his wife to insanity and ultimately lead to his own demise.

When searching for this via Google I must have misremembered "great chain" as "golden thread".

Rather than taking me to the comforting world of EngLit, the subsequent search results brought me back to Grenfell and introduced me to Dame Judith Hackitt's recommendations for a more robust health and safety regime for the UK's high-rise buildings.

From there I went on to understand the concept of the Golden Thread as both a metaphor and a paradigm for information management, used in a range of different areas of knowledge:

  • a student's record of learning
  • a personal medical record
  • a ledger of asset ownership.

Then I started getting curious. And then I had a daft idea.

Always a dangerous moment.

At the risk of sounding crazy, it occurred to me that there might be a common set of values underlying the five principles of AI safety which the UK government has outlined in its draft approach to regulating artificial intelligence, and the ten requirements which Hackitt has already laid out for the Golden Thread for managing the safety of our high-rise buildings.

It's perhaps hardly surprising, given that there is an element of "motherhood and apple pie" in both approaches. But what if we did a more detailed read-across from one to the other?

What if we tried to apply some of the post hoc lessons learned following the tragedy at Grenfell to the ex-ante stance of the UK government as it attempts to create a regulatory framework that will lessen the chances of an AI-related disaster actually occurring?

What would that comparison look like?

Being intellectually lazy, and feeling more than a little "meta" about the whole thing, I asked ChatGPT-4 to help.

This was the result:

"Mapping the five AI regulatory principles to the ten principles of the golden thread for building safety involves aligning the objectives and intentions behind each set of principles. Here's how they correlate:

  • Safety, security and robustness (AI) to Accurate and trusted; Secure; Residents feeling secure in their homes (Golden Thread): Ensuring AI functions securely and robustly correlates with the need for accurate and trusted systems in building safety. Both require risk management to maintain security and instil a sense of safety in residents.

  • Transparency and "explainability" (AI) to Understandable and consistent; Single source of truth (Golden Thread): AI systems should be transparent and their decisions understandable, similar to the golden thread's need for consistency and a single source of truth that is easily interpreted by stakeholders.

  • Fairness (AI) to Support for culture change; Relevant and proportionate (Golden Thread): AI's adherence to laws regarding equality ensures fairness, aligning with the golden thread's principle of supporting cultural change towards fairness and ensuring actions are relevant and proportionate to the risks.

  • Accountability and governance (AI) to Accountable; Support for culture change (Golden Thread): The need for clear accountability in AI overlaps with the golden thread's principle that those involved in building safety are accountable for their actions and decisions, fostering a culture of responsibility.

  • Contestability and redress (AI) to Simple to access (accessible); Longevity, durability and shareability of information (Golden Thread): Providing routes for contesting AI decisions parallels the golden thread's accessibility principle, ensuring information is durable, shareable and accessible for scrutiny or redress.

In summary, each AI principle can be seen as a facet of the golden thread principles, with a common theme of ensuring safety, accountability and fairness in both domains. It's about creating systems, whether digital or structural, that are reliable, transparent, equitable and maintainable, with clear avenues for addressing issues and continuous improvement." (my emphasis)

Of course, I don't subscribe to the beliefs which once inspired the Great Chain; there is no God, and there is no hierarchy in nature, just different points of view. But I think we all wonder sometimes about the possible interconnectedness of all things. There does seem to be a common theme here: the principles of the Golden Thread do have their counterparts in the proposed approach to AI regulation (thank you, ChatGPT-4).

Perhaps my daft idea was not so crazy after all.

However, many people maintain that any proposal for regulating AI is crazy talk.

AI & The Golden Thread Part 4: isn't this just crazy talk?