This post is the first in a series discussing my image, Allegory 3. It's part of a wider series of images about AI. For some background to the discussion, please see Every Picture Tells A Story.
I began this set of posts with a complaint. Keeping up with the rapid pace of change in the field of artificial intelligence was a struggle. Every new development only led to further questions. I was feeling overwhelmed. Now, in this final allegory of AI, it looks like the quest for understanding has ended here. The sombre colours and the long shadows suggest a kind of twilight. Time has passed since we last checked in. There is a feeling of weariness. It's been a long day.
What do we see? A wooden box, similar to the one in Allegory Two, but now unlocked. Keys have been scattered across the tabletop, tried and then discarded. There’s a vase of flowers and a small bowl to the side, holding an apple (again) and some grapes (again again). The flowers are starting to wilt with time, and the fruit has begun to wizen.
The scattered keys suggest that it's taken many attempts to open the mysterious box. We were expecting to find treasure, hoping that unlocking the box and revealing its secrets would lead us to the possibility of understanding. But the unlocked box contains only keys. A lot more keys. Is there a feeling of being underwhelmed by this result? Perhaps even a sense of disappointment. There still seem to be more questions than answers. What's really going on here?

True Story
Thirty years ago, I was living in Edinburgh. One day, I had to get up early to catch a commuter train to Glasgow for a meeting. Something to do with Virtual Work 1.0. It was a miserable morning: cold, grey and damp. The Scots word for it is "dreich". Needing some money for the journey, I stopped at a cash machine up on The Bridges, overlooking Waverley station.
I'm often quite befuddled first thing in the morning. It can take me a while to remember who and where I am. On this particular day, I was running late, with only minutes left to catch my train. Of course, there was already a queue of other early risers ahead of me. I stood and waited. Anxiety about the day ahead was mounting. I wanted to be home in bed.
Finally, it was time to enter my PIN. Something distracted me from the task at hand. Some agitprop activists had plastered a sticker on the front of the cash machine:
"Workers rights for robots!"
In my confused early morning state, I was so diverted by the idea of the ATM as some kind of sentient e-worker that I momentarily forgot my surroundings. What would "rights for robots" actually mean? Unions? Representation? Agency?
These were exciting ideas on a cold, dull morning. After a few moments lost in thought, I remembered my mission for the day. As I took my money and turned toward the station, I heard laughter from the queue which had formed behind me. Then I realised what I'd done.
I'd just said, "Thank you." Right out loud. To the cash machine.
Does the key fit the lock or does the lock fit the key?
Allegory One is an assessment of the current state of generative artificial intelligence (genAI). It's an illustration of some of the weaknesses in the tools we have now. Allegory Two is a forecast (based on an analysis of trends and data) that recent technological improvements will soon lead to a new generation of AI devices, more fit for purpose and more commercially successful than those we have now. Allegory Three is different. It's a prediction based on personal hunch and speculation.
One way of interpreting this image is as a reference to the legend of Pandora's Box. Has our quest for knowledge - in this case, the ongoing search for better models of artificial intelligence - got us into trouble, just as some have feared?
Perhaps unlocking the mysteries of what constitutes "intelligence" has unleashed evil into the world. Of course, in the context of AI, rather than death, famine and pestilence, we are warned to fear disaster in the form of data-driven bias happening at scale; widespread job losses; and even the possibility of the machines ultimately taking control.
However, in this image, there's no visible sign of such troubles. Nothing truly bad seems to have happened as a result of our ongoing discoveries in AI. Allegory Three is a prediction that, as we make progress towards understanding the mysteries of intelligence, the worst we will face is further challenges. In the twilight of the working day, what we'll discover is not the secret to intelligence itself, but just more questions.
Where are the locks that fit these keys?
Artificial General Intelligence: hit or myth?
According to ChatGPT-4, the term "myth" can be understood in two ways:
1. a narrative that shapes our understanding of the world and our place within it; or
2. a widely held but false belief.
Pandora's Box is an example of the former. It's a Category 1 warning that the gods must be obeyed. It may even be a foundation myth for later legends of control and power, such as the Biblical fruit of knowledge in the Garden of Eden.
Of course, there are no gods; just things we don't understand yet. Such as human intelligence. There is as yet no single agreed definition of what constitutes "intelligence". Nevertheless, considerable academic and commercial activity is currently focused on developing technologies that will mimic or even exceed human intellectual capabilities (whatever they are deemed to be). This new and improved technology has been called "Artificial General Intelligence", or AGI.
According to TechTarget, AGI is:
The representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the AGI system could find a solution. The intention of an AGI system is to perform any task that a human being is capable of.
So in theory:
the performance of these systems would be indistinguishable from that of a human. However, the broad intellectual capacities of AGI would exceed human capacities because of its ability to access and process huge data sets at incredible speeds.
Moreover:
“When AGI is achieved, it will be a pivotal moment in human history, rivaling the invention of fire and the wheel.” — Nick Bostrom, via Medium
OpenAI is the company behind ChatGPT. Its Charter defines AGI as:
"highly autonomous systems that outperform humans at most economically valuable work."
A definition which seems immediately problematic, as Bloomberg's Shirin Ghaffary points out:
But what counts as a “highly autonomous system,” or for that matter, “economically valuable work?” Is unpaid domestic labor like child rearing, for example, economically valuable?
As Ghaffary explains, no one seems to know what AGI actually is.

It seems that while the pursuit of AGI is real, we don't know what we are looking for.
Let the good times roll
One thing most of the competing definitions of AGI have in common is the Utopian promise of an end to scarcity (at least of human knowledge and capabilities).
As Sam Altman, co-founder of OpenAI, puts it in his latest update to the company's mission statement:
AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.
Tech financier Marc Andreessen goes even further, proclaiming that such advancements in artificial intelligence "will save the world". His vision for the AGI era reads almost like a return to Paradise, a brand new Garden of Eden for us poor, fallen things:
Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable... Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable... Every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement...
It's quite a compelling vision, right? Good times for everyone!
"But does it scale?"
Integral to this AGI school of thought is the assumption that replacing the human expert (a non-scalable resource) with artificial expertise (portrayed as highly scalable, itself another questionable assumption) will somehow make everything "better".
I can see the advantages, to a point - but getting there will require us to make some balanced and judicious choices. There have been claims that AGI in the context of our medical health will be like having a million doctors in your pocket, available for consultation at any time of day or night. My inner sceptic with a nasty cough is therefore very much looking forward to a future of AI-fuelled hypochondria.
Or perhaps it would be better to have guaranteed access as required to one single, well-trained and trusted source of advice? We could even call that trusted source something like "the family GP", or some such. Crazy talk, I know.
However, there are many reasons why AGI may not scale as readily as some would have us believe, ranging from technical limitations (bigger is not always better) and resource implications (data centres gobble up water and electricity in very large amounts) to unresolved ethical dilemmas ("Your values or mine?").
Getting closer to the idea of the sentient Edinburgh cash machine is going to take some major progress, but as we'll see in Part Two, some fundamental questions remain unanswered.
Meanwhile, I can't shake the feeling that the promise of a complete end to the scarcity of human capital, and the subsequent benefits arising (as envisioned by Altman, Andreessen et al.), is just another Category 2 myth: an attempt to divert attention while making a fortune at our expense.
Let the good times roll, indeed; but who for, exactly?
"The illusion is real..."
Next: Allegory Three (part 2): what's in the box?