This post is the second in a series of blog posts discussing an allegorical image I created using an AI. It's part of a wider series of images about AI. For some background to the discussion, please see Every Picture Tells A Story

In Part One of this update I suggested that the concept of Artificial General Intelligence (AGI) might just be a myth, given that there is, as yet, no agreed definition of what it might be. Today's post continues the idea of the unlocked box as a metaphor for this condition. But what's the significance of all these keys?

It seems that we really don't know what we are looking for. There is widespread disagreement on how to find AGI. There's also no consensus on how it should be managed.

I've blogged before about the sectarian differences between the “effective altruists” (catchphrase: “Doing good, better”) and the devotees of effective accelerationism (sometimes abbreviated to e/acc), which advocates for unfettered technological progress.

Both camps lay claim to owning the moral high ground regarding the development of artificial intelligence. But as ever, it's a case of exactly whose morals will prevail, and why. Layer onto that a further complication in terms of the merits of "open-source" versus "closed" (proprietary) development strategies, and the result is a never-ending disagreement.

Alongside that is the compelling argument that the seemingly "godlike powers" of artificial intelligence and AGI are being espoused as part and parcel of an ongoing false and self-serving strategy to avoid public scrutiny.

Of course, much of the hype conceals the reality that, as per Allegory One of this series, much of our current AI just does not work very well right now.

Nevertheless, commercial forces and the diktat of politics mean that we find ourselves yet again in a technology arms race, in which various players are fighting to become dominant. To propel that struggle, some kind of threat - an enemy or a bogeyman - is required; something that will fuel popular concerns while distracting attention from what may really be going on.

Enter the singularity

Alongside the promise of an end to the scarcity of human capital, what many of these competing visions of AI superiority have in common is the idea of "the singularity", otherwise known as:

a hypothetical future point in time when technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.

The singularity has its origins in both theoretical physics and science fiction. It is seen as both an opportunity and a threat to humanity:

A singularity in technology would be a situation where computer programs become so advanced that AI transcends human intelligence, potentially erasing the boundary between humanity and computers. The singularity would also involve an increase in technological connectivity with the human body, such as brain-computer interfaces, biological alteration of the brain, brain implants and genetic engineering.
From TechTarget

As with AGI itself, there is no precise definition of what the singularity is, let alone what it might mean in practice. All we have right now is a range of opinions which seem variously intended to inspire or to frighten us, as per this video from the Financial Times:

AI: a blessing or curse for humanity? | FT Tech
Artificial intelligence is playing an ever-increasing role in our lives. But will this prove to be a blessing for humanity, or have we created a monster? We talk to leading futurists and experts to find out the impact they believe AI will have on our personal potential, jobs, and even safety.

It's (not) alive!

Some proponents of AGI posit the singularity as a potential existential threat to humanity; a time when it will be impossible to distinguish humans from machines; and where the machines are capable of taking control.

In support of this apparently appalling threat, there have been seemingly credible claims of AI showing signs of sentience, sometimes described as "emergent" behaviour.

Suddenly the Large Language Models which underpin generative AI tools such as Google's Bard (now retired) and OpenAI's ChatGPT appeared to have taught themselves new skills. Researchers at both Google and Microsoft found the machines were suddenly capable of expressing themselves in languages they weren't originally trained on. Had the AI learnt how to learn? Were these tools showing signs of ... life?

Unfortunately, the reality is much less exciting. As a report by Vice goes on to reveal, these findings simply confirmed that "you are what you measure." What the corporate-backed research claimed to have discovered (because they went looking for it) turned out to be "a mirage". In this instance, Stanford University researchers were able to show that the evidence for emergence depends very much on how one asks the questions. It seems that a sentient AI (such as my Edinburgh cash machine) remains a distant dream.
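The Stanford argument can be illustrated with a toy example (the numbers below are my own invention, not the researchers' data): a capability that improves smoothly with model scale can look like a sudden "emergent" leap if you score it with an all-or-nothing metric.

```python
# A sketch of "you are what you measure", using made-up numbers.
# Suppose per-token accuracy improves smoothly as models get bigger,
# but we score a 10-token answer with an all-or-nothing exact-match
# metric: every token must be right, or the answer scores zero.

scales = [1, 2, 4, 8, 16, 32, 64]  # hypothetical model sizes
per_token = [0.50, 0.60, 0.70, 0.80, 0.88, 0.94, 0.98]  # smooth gains

answer_len = 10
# Exact match requires all 10 tokens correct: probability p ** 10.
exact_match = [p ** answer_len for p in per_token]

for s, p, em in zip(scales, per_token, exact_match):
    print(f"scale {s:>2}: per-token {p:.2f}  exact-match {em:.3f}")
```

Under the smooth metric the improvement is steady; under exact match the score sits near zero for small models and then shoots up, which reads as "emergence" even though nothing discontinuous happened in the underlying model.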

However, all this research reaffirms an important point:

💡 We need to keep a constant critical eye on the claims that are being made, and on whoever is making them.

In its darkest form, the threat of the singularity feels like the bogeyman hiding under the bed, behind the closed door or inside the locked box. It's The Terminator.

However, in the absence of any credible supporting evidence, there is a problem: the singularity is just another proverbial unicorn: lots of people say they have heard of one, but no one has ever seen it. It's a convenient way of avoiding scrutiny. It's a Category 2 myth, i.e. "a widely held but false belief". A legend, serving to deceive and control.

Which is unfortunate. And confusing.

Sometimes we are told that AI delivers genuine and important benefits to society - e.g. improvements in the early detection of breast cancer.

AI may also be about to level the playing field when it comes to the global distribution of resources such as intellectual capital:

Could AI transform life in developing countries?
Optimists hope it will ease grave shortages of human capital

At other times we are fed these myths about the speculative dangers of AGI and the singularity. Journalists repeat the hyperbole regarding the prospect of widespread job losses without also considering the underlying issues.

They say "existential threat"; we say "fundamental questions"

I use AI tools a lot. ChatGPT 4 has enhanced my personal productivity by augmenting my existing skill set with some of the many things I can't do for myself. It's helped me to write software code; it has created most of the imagery for this blog; and it proofreads my writing. As a research tool, it has extended my knowledge and awareness of certain issues in less time and far more effectively than any simple internet search. It's just that I know it's far from perfect; which, in turn, means I'm a sceptic when it comes to some of the claims being made, particularly on the subject of AGI.

Of course, I like putting the "me" into "meta". So I had to ask the AI itself about all this. According to ChatGPT:

The notion of Artificial General Intelligence (AGI) presents a philosophical conundrum on multiple levels, intertwining with deep questions about consciousness, ethics, identity, and the future of humanity itself. AGI is not just a technological challenge but a profound philosophical puzzle. It forces us to re-examine fundamental questions about consciousness, ethics, identity, and the future of human society. The philosophical discourse surrounding AGI is crucial for guiding its development in ways that are ethically sound and beneficial to humanity as a whole.

A bit of to-and-fro with the machine produced examples of these "fundamental questions". To paraphrase a few:

  • The Large Language Models (LLMs) which power generative AI (genAI) are trained on vast amounts of third-party intellectual property. Should the copyright holders be reimbursed for the use of their work? If so, how?
  • If we create machines that can think, reason, and understand at or beyond human levels, does this imply that these “non-biological” entities possess consciousness or sentience?
  • If an AGI can have experiences, should it have rights?
  • How do we determine moral responsibility in actions taken by or with AGI? Will I get off lightly if I say "The AI made me do it"?
  • Who will be accountable for any unforeseen consequences leading to actual harm to real people?
  • What is the value of human purpose and the unique value of human creativity, emotion, and intelligence in a world where these features can be readily replicated and at scale?
  • Can we control an AGI so that it always acts in ways that are beneficial to humanity? What values does it uphold, and who gets to decide? How will we handle conflicts of interest between AGIs which are developed according to different systems of belief?

What's in the box?

Allegory Three: still life with box and keys - ChatGPT 4

Allegory Three is a prophecy: every time we unlock one mystery in the search for AGI, there will be more questions like these to answer. Further mysteries to unlock. That's why the opened box contains nothing more than keys. A lot more keys. We're going to need them.

We're going to need all the help we can get

Thinking about these questions takes me back to my encounter with the supposedly sentient cash machine. If we're ever going to achieve anything close to AGI, these "philosophical conundrums" need to be resolved. Identifying and correcting for bias requires some hard choices. Given the present state of the culture wars, it is hard to see any signs of progress. Right now feels like a difficult time for making such decisions. As a Bloomberg commentator recently noted:

"The once-clear distinction between facts and values is under assault from all sides."

It seems easier to create a smokescreen based on putative threats from the singularity than it is to find the answers.

The Boo Radleys: What's in the box?

Next time: why having so many unanswered questions may not be such a bad thing.
