A couple of recent announcements have me reaching for the first of my Five Sniff Tests for AI.
The headlines proclaim that the boys are back in town under Trump 2.0:
The Tech Oligarchy Arrives - The Atlantic
In The Donald's brave new American world, it is the Billionaire Boys Club of Elon Musk (X), Jeff Bezos (Amazon and the Washington Post), Sam Altman (OpenAI), and Mark Zuckerberg (Meta) who are on the up-and-up. The men who have, for the last few years, been pressing hard to ramp up the pace of (American) AI development.
It all feels pretty consequential for the future of artificial intelligence.
My Sniff Test 1 for AI asks us to consider: "Who's talking? And why?"
To which it now seems necessary to add: "And why are they all men?" Because these AI leaders are not just any ol' boys.
This is the "go faster" crowd, sometimes also known as the Effective Accelerationists (or e/acc, for short).
Although much of the current AI-related hype revolves around generative AI ("genAI") tools such as ChatGPT, the stated goal of the e/acc crowd is to develop Artificial General Intelligence (AGI).
According to OpenAI's website, AGI is a technology that promises to:
…elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.
In the company's view:
AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.
Some proponents of AGI expect that the new technology will possess capabilities, capacities, and characteristics equivalent to human consciousness.
OpenAI's Sam Altman invites us to believe that this could result in:
"a glorious future…which will benefit all humanity."
In support of all this ambition, the new Trump regime has dispensed with the previous administration's more measured approach. Unprecedented further investment in expensive AI chips (GPUs) and costly data centres is promised. Ethical guidelines and safety guardrails are out; the race for AI supremacy on behalf of the USA is on.
In the absence of any meaningful checks and balances, it feels reasonable to ask: what are the chances that this really will be a "glorious future" for everyone? Or is it more likely to benefit only a few of us?
The first whiff that Sniff 1 is giving me is the smell of potential danger ahead. The tech bro fraternity are not about to admit that generative AI is inherently flawed; that its results may always require human supervision. Their pursuit of AGI is a distraction from that inconvenient truth while they make themselves even richer. Their newfound support for Trump is a situation that reeks.
All I sense so far is the possibility of short-term gain for the few now, and the probability of long-term pain for the many later.
And in this case, "the few" really are all men. I've been looking around, and I can't find anyone who identifies as female who also identifies as e/acc. It would be exciting to meet one. If you're out there, please let me know!
After all, it's not as if AI didn't already have a few issues in this respect.
The lack of figurative female representation in generative AI is something I noted a few months ago.
Take, for example, the robot imagery featured on this blog. I ask the image generator for a picture of a robot doing a certain thing or in a certain situation; is it just me, or do the images of the robots themselves all come back as discernibly "male"?
If the answer is yes, then it is worth asking why.
The reason, of course, is the historical invisibility of women in the data and imagery on which generative AI has been trained.
In the same post, I also pointed out that:
Information technology (in fact, any technology) in and of itself has no gender.
However, the statistics and ongoing trends all point to a continuing gender imbalance in technology-related activities generally:
- Between school and university, the number of young women taking STEM subjects drops by 18 per cent, and a further 15 per cent between university and the workplace.
- Women make up 50 per cent of the UK workplace, but less than 15 per cent of STEM jobs.
- While more women than ever are part of the workforce worldwide, they remain under-represented in leadership roles. In the UK, for instance, women occupy more than a third of board positions in FTSE 350 companies, but gender parity in senior management remains elusive.
In specific sectors of the digital economy, such as gaming, there is:
An ongoing struggle for visibility, representation, and equality in e-sports.
A history of women's esports in the UK - Esports News UK
Again, this disparity and inequality in the world of high tech is in line with the ongoing lack of equality in the wider world. It comes as no surprise that there is ongoing uncertainty around the impact that generative AI might have on women's lives.
Do we think that:
AI might be bad for women
According to Mercer's 2024 Global Talent Trends Study:
Since women hold more of the jobs expected to be disrupted by AI, they will likely be more adversely impacted. For example, the administration, healthcare, education, and social services industries all have high proportions of women and are among the sectors most likely to experience widespread job losses due to AI and automation.
Or perhaps we should expect that:
AI may help address the existing gender bias
Writing in the Financial Times, Lorena Goldsmith suggests that AI could be the key to unlocking leadership opportunities for women:
AI brings hope for women to gain greater representation in senior management, both by fighting gender bias via mass-scaled AI and by using it to develop their leadership skillsets.
Of course, there are women in leading roles in some of the AI companies, and of course, there are writers and commentators on AI who also happen to be female. Concerted efforts are being made to visibly redress the balance: 12 women shaping AI | Nesta.
Nevertheless, how often do we hear directly from these individuals as part of the daily discourse about AI and how it is going to benefit society? It's not that often.
There were at least two female tech leaders at the Trump inauguration: Ruth Porat, Google's Chief Investment Officer, and Lynne Parker, Executive Director of the President's Council of Advisors on Science and Technology.
Were they highlighted in the reporting? No, they were not. But at least we do know the important stuff, i.e. what the wives and girlfriends of the tech bros were wearing.
Perhaps it doesn't actually matter?
If whoever is making the point is honest, fair, and ethically minded in their outlook, then why should their gender (or their dress sense) be an issue at all?
Of course, it shouldn't.
And yet, a recent UN report predicts that:
If current trends continue, AI-powered technology and services will continue to lack diverse gender and racial perspectives, and that gap will result in lower quality of services, biased decisions about jobs, credit, health care, and more.
Artificial Intelligence and gender equality | UN Women -- Headquarters
Sniff Test 1 is now giving me the rotten odour of doubt: so much for AI making the world a better place for everyone. The lack of diversity is consequential. And not in a good way.
I must be naive. Somehow, I assumed and expected that an artificial intelligence would be better (more knowledgeable and less compromised) than the information sources available to me at present. Maybe that is no longer quite so likely to be the case.
All of which leaves me asking: who will be held accountable for any negative impact resulting from an AI that supplies low-quality information and supports biased decision-making?
Look who's talking
Generative AI has yet to have its Cambridge Analytica moment.
In 2018, Mark Zuckerberg (Facebook's CEO) and Sheryl Sandberg (then its COO) were forced to respond to the revelation that personal data from Facebook users had been improperly harvested and exploited for political purposes. Both were deemed accountable for the systems they developed. The scandal focused global attention on why the organisational culture they created had given rise to serious concerns about data privacy, consent, and the influence of social media on democracy.
It does not seem unfair to hold either Zuckerberg or Sandberg responsible; both were clearly acting of their own volition, and each had agency when it came to shaping and refining Facebook's business model.
However, there are claims that Zuckerberg took a back seat in responding to official inquiries about the scandal, in favour of Sandberg's more agreeable public profile as a female business leader and a feminist. The suggestion is that Zuckerberg, in effect, used Sandberg as a human heat shield to protect himself from detailed public scrutiny. Facebook's Sheryl Sandberg under fire as scandals mount - CBS News
"He's got to be so macho"
The real nature of the Zuckerberg/Sandberg dynamic may never be fully understood. However, suspicions were revived just recently when Zuckerberg reportedly "threw Sandberg under the bus" as he announced a plan (part of his blatant pivot Trump-wards) to inject more "masculine energy" into the workplace.
All the supposedly "feminine energy" concepts, such as diversity, equality, and inclusion, are out. Zuckerberg has not actually explained why he believes all that "feminine stuff" was a drag on the development of AI at Meta. With Trump in the White House, what we do know for sure is that, in terms of Meta's organisational culture, more of the good ol' boy stuff is in.
For me, Zuck's position is ridiculously offensive; or do I mean offensively ridiculous?
I'm not sure. It could well be both. Maybe I'm lacking in "masculine energy"? In any case, this stinks.
Perhaps I need someone to mansplain it to me.
Or ask a woman. Here's Lorena Goldsmith again:
The onus is on Big Tech to prioritise ethics in AI (applying multiple and diverse perspectives to gain the highest benefits) alongside design and development. Otherwise, AI will eventually face its own crisis of legitimacy, in the same way every patriarchal model now does.
Gender bias comes in many guises
It seems to me that, given these recent developments in its (male) leadership, generative AI may indeed now face its own crisis of legitimacy. Discrimination and bias are counterproductive. The risk is that the lack of diversity in AI development is likely to inhibit its value and relevance to society at large. Any tool based on generative AI will only be of real benefit when its results can be trusted. But how can we trust it if its outcomes are likely to be biased concerning half the population? Or constrained to meet the whims of a dictator?
Large Language Models Reflect the Ideology of their Creators
Sniff Test 1 asks us to consider "Who's talking? And why?"
Building trust requires accountability. Someone needs to explain what went wrong, why, and what can be done to fix it.
My suspicious mind is wondering: when, or if, genAI has its Cambridge Analytica moment, or when the time comes for a public explanation of why genAI tools have failed to deliver a better society for all, will that also be the moment when the likes of Zuckerberg, Musk, and co. fall back on some female voices? Voices laced with sufficient "feminine energy" to cover their respective arses?
This gives rise to some horrible, haunting questions that have left me feeling queasy:
- Is that female spokesperson for AI there because of who she is, what she knows, and what she has accomplished in her own right?
- Or is she there because she is a woman?
Rocking on the see-saw of the double standard is enough to make anyone feel seasick. When was the last time a man was asked if he was a figurehead or a focal point, except perhaps Nick Clegg?
Sniff Test 1 - the results:
When it comes to assessing any new claim about AI, my first step now is to consider not only who is telling me about the new thing (and why), but also how that person, whether male or female, came to be in that position.
I want to know: are they speaking in their own capacity, or are they deflecting attention elsewhere?
And I agree with Lorena Goldsmith: AI will only make real progress in terms of being trusted and trustworthy when the focus is on addressing its ethical issues.
Of course, the men promoting the development of AGI keep promising us that a technology capable of matching or even exceeding our own intellectual capacities is just around the corner.
In order to achieve their personal financial and political goals, they need us to believe that one day it will be an AGI chatbot, equipped with a synthetic equivalent of human consciousness, telling us what to think about everything.
That somehow, as part of that development, all of the ethical issues surrounding gender, bias, and more will have been overcome.
Do we really believe that will happen? Or are we asked once again to accept that the illusion is real?
But why are they all men?
On its own, Sniff Test 1 can't answer that question. We already know why the world is so screwed up. But for me, the focus on who is telling me what (and why) is still helpful.
Our AI leaders (of any gender) do not need to spend yet more money on better GPUs. They do not need to invest in more energy-hungry data centres in order to make their products more trustworthy. They don't need to demonstrate their machismo in the workplace.
Our AI leaders can establish public trust in AI as a new source of social value by being accountable for its limitations. They don’t need to distract us from the difficult ethical issues with the promise of a substitute for human consciousness.
They just need to demonstrate the value of a human conscience.