Every day brings fresh news about artificial intelligence and machine learning. Impressive claims of progress are mounting up. Intelligent agents will do all the tedious work for us. An AI chatbot will soon be your new best friend.

Meanwhile, the naysayers and doom-mongers are also making themselves heard. AI is coming for our jobs. The energy demands from all the new data centres will wreck the green agenda. It’s the AI apocalypse!

On closer examination, it seems that the stories from all sides are often hyperbolic, sometimes dangerously erroneous, and occasionally deceptive.

Even so, there is some genuine good news amongst all the dross. AI does seem to be making helpful advances in certain fields, including health care, food production, and social inclusion.

AI feels too important to ignore; but how does one learn to separate the pros from the cons? When do we get a moment to consider the wider implications? How do I find answers to the Who? What? Where? How? and Why? of AI?

What I’m missing is a framework to help answer these questions.

Get ready to inhale

In the real world, whenever we need to quickly ascertain whether something is good and safe, we often check how it smells. The metaphorical “sniff test” is well established as an informal means of assessment. It’s a quick and easy way to determine the likely plausibility, credibility, or legitimacy of a thing. If it passes the sniff test, we are probably good to go.

Here are my five personal sniff tests for artificial intelligence.

Sniff 1: Look who's talking

Everyone has an angle, right? Venture capitalists like Marc Andreessen proclaim that “AI will save the world”. Professionally sceptical folk like Gary Marcus regularly cast doubt on such assertions. Modern-day Cassandras, including the acclaimed “godfather of AI,” Geoffrey Hinton, now predict that the technology he helped create may wipe us out within the next few decades.

It feels like there are so many people trying hard to sell me something, convince me of something, or tell me what to think about something. Sniff 1 is a moment to find out a little more about the messenger. Who is giving off the sickly smell of PR, and who has chosen to wear a more optimistic fragrance? What is the likely motivation, and how do I feel about it?

And why are they all men?

Sniff 2: What type of AI? And what for?

Artificial intelligence technologies come in many guises: machine learning, computer vision, large language models, and neural networks, to name a few. For us non-computer scientists, it can be hard to understand the distinctions. Headline writers and many journalists tend to file everything under the one general heading of “AI.” I appreciate the need for journalistic economy of language, but I’m not sure it’s always entirely helpful.

This week, the UK government announced plans to use AI to transform the public sector. The announcement, or at least the way it has been reported, highlights the problem. Artificial intelligence is going to “unleash national renewal,” or so we’re promised. But how?

Aside from the very real technical, financial, and legal obstacles to be overcome, there are a whole lot of ethical issues arising as well. For the government’s plans to succeed, it will need to work hard to establish sufficient public trust in this new way of working. However, previous government policy may have left many of us feeling quite alarmed:

Can Rishi Sunak’s big summit save us from an AI nightmare? - BBC News.

More recently, some of us will have witnessed the potentially dangerous foibles of a generative AI hallucination:

  • Apple Intelligence told us that Rafa Nadal is gay. He’s not (sadly 😉).
  • Luke Littler hadn’t won the darts competition when Apple Intelligence claimed he had.
  • Apple Intelligence also informed us that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had killed himself. He hadn’t.

Is this the same “AI” that’s going to help deliver our public services more efficiently? Fix the potholes in my street? Or diagnose my cancer scans?

I certainly hope not.

We need a clearer definition of what type of AI is being applied in each circumstance, and why. ChatGPT will not fix the potholes, but a machine equipped with computer vision (i.e. one that can “see”) might. Of course, that in turn raises ethical concerns such as surveillance and privacy, and these will need to be addressed.
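
For the curious, here is a minimal sketch of what the “seeing” part might involve. It is purely illustrative: it assumes a hypothetical road photo (road.jpg), uses the OpenCV library, and naively treats any large dark patch as a candidate pothole, whereas a production system would rely on a detection model trained on labelled road imagery.

```python
# Toy pothole spotter: flag dark, sizeable patches in a road photo.
# Purely illustrative; a real system would use a trained detection model.
import cv2

img = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input photo
blurred = cv2.GaussianBlur(img, (9, 9), 0)          # smooth away road texture

# Naive assumption: potholes appear as dark regions against the tarmac.
_, mask = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep only patches big enough to plausibly be potholes, not specks.
candidates = [c for c in contours if cv2.contourArea(c) > 500]
print(f"Found {len(candidates)} candidate pothole(s)")
```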

But maybe some of our fears are irrational. Perhaps we’ll overcome the challenges, given time. Informed oversight can help build trust. We should proceed carefully if we don’t want to risk throwing the “social progress through technology” baby out with the “existential threat to humanity” bathwater.

Sniff 2 reminds us that it’s what a technology is used for that can really stink. If we’re being asked to trust “AI,” we’re going to require more detailed scrutiny than we’re currently getting from many of our news outlets. What kind of artificial intelligence is involved? What can (and can’t) it do for us? How will it be managed, and by whom?

Any decent scent will explain itself over time. If something smells off, we should take heed accordingly.

Sniff 3: Fad or phenomenon? Where are we in the technology hype cycle?

Nikolaus Otto built the first practical four-stroke internal combustion engine in 1876. He intended it to be a fixed, stationary device used to power factory machines as an alternative to the dominant technology of the time, the steam engine. The idea of mounting it on wheels as a mode of transport never occurred to him. His innovation attracted some initial excitement but was also dismissed as a fad. No one fully predicted the long-term social and economic shifts—urban sprawl, globalisation, and the rise of the oil economy—that this new technology would trigger.

When Tim Berners-Lee invented the World Wide Web in 1989, he intended it to be a tool for collaboration within scientific and academic communities. Outside those communities, his innovation was also dismissed as a fad. Few people expected it to disrupt any of the dominant technologies of the time (publishing, broadcasting, commuting to work…). No one predicted that the Web would transform nearly every aspect of modern life.

The rationale for both these outcomes is best defined by Roy Amara, long-time president of the Institute for the Future in Palo Alto. His insight, now known as “Amara’s Law,” states that:

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Amara’s Law and Its Place in the Future of Tech

According to Amara’s Law, the long-term impact of a new technology generally takes us by surprise. Otto never anticipated the motor car. Berners-Lee never anticipated that his innovation would give rise to TikTok and PornHub.

Of course, not every new discovery becomes a fad, and not every fad becomes a phenomenon. Not so long ago, some people predicted that something called the metaverse was going to be the Next Big Thing. Now, many of those same people are predicting that AI really is the future. Or that the future really is AI. Or something like that.

It is the classic gold rush. A new discovery drives excitement as people rush to make their early fortunes. The initial excitement (aka “hype”) quickly peaks and then gives way to a more realistic understanding of the long-term impact of any new innovation. Amara’s Law describes this pattern; in high-tech circles, business consultants often chart it as the Gartner technology hype cycle.

Maybe it is still too early in the current AI hype cycle for anyone to state with confidence that all this new technology really is a phenomenon and not a fad:

“AI mania may be getting ahead of itself. Fewer than one in 20 workers say they use AI daily. Fewer than one in 10 US companies have incorporated AI into their operations.”
Ruchir Sharma in the Financial Times: top 10 trends for 2025

Just as in any gold rush, the only businesses making real money at the moment are the ones selling the picks and the shovels. Of course, they are the same people who are making most of the noise about the benefits of AI, talking up the advantages that are supposedly available to us now (or more often, “coming soon”).

Perhaps we should trust our sense of smell to guide us through the hype cycle.

Sniff 3 is the time to stop and consider: does this new claim about AI reek of just another short-term fad? Or might it actually be the start of something that will be of long-term significance?

Sniff 4: How about that? (Jevons’ Paradox at work)

Here’s something to puzzle over: as a technology becomes more efficient, it seems intuitively reasonable to expect that we will consume less to run it, rather than more, right? And if that technology can do more for us at less cost, then in turn we should expect to spend less on the resources needed to operate it. Right? My new car goes further and faster on one tank of petrol than my old one; therefore, I’ll spend less on fuel, right?

Unfortunately, in most such cases, our intuition is wrong.

In fact, experience shows that as a technology becomes more efficient, its reduced cost and increased capability often lead to greater demand for its use. Instead of conserving resources, we tend to consume more, as efficiency makes the technology more accessible and applicable to a wider range of circumstances. Now that my car can get me from A to B faster than ever before, I’m going to visit the far-flung C, D, and E as well, even though it will inevitably cost me more in petrol to do so.
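
To put some entirely made-up numbers on that car example: suppose the new car doubles my fuel efficiency, but the cheaper, faster miles tempt me into driving three times as far.

```python
# Jevons' Paradox in miniature, with hypothetical numbers.
old_mpg, new_mpg = 30, 60             # miles per gallon: efficiency doubles
old_miles, new_miles = 8_000, 24_000  # annual mileage: cheap miles invite more trips

old_fuel = old_miles / old_mpg  # ~267 gallons a year
new_fuel = new_miles / new_mpg  # 400 gallons a year

print(f"Old car: {old_fuel:.0f} gallons/year")
print(f"New car: {new_fuel:.0f} gallons/year")
# Efficiency doubled, yet total fuel consumption rose by 50%.
```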

As AI becomes ever more capable and efficient, our intuition might lead us to believe we will need fewer and fewer resources to keep it running.

However, the opposite is happening. The growth of AI has led to a boom in constructing new data centres, significantly increasing the demand for electricity. The rise in energy consumption directly impacts the climate change agenda and our efforts toward carbon neutrality.

This is Jevons’ Paradox at work:

“As a technology improves (becomes more efficient), it will eventually end up consuming more resources, not less.”
Jevons’ Paradox.

The rationale behind Jevons’ Paradox deserves another blog post. Right now, I’m interested in how it might play out concerning artificial intelligence. What impact might the growing demand for increasingly capable AI really have on the resources required to deliver it?

There are predictions galore that such tools will replace human intelligence by, for example, displacing a skilled and experienced workforce. It might even seem quite reasonable to expect this.

Of course, ChatGPT undoubtedly “knows” more than I do; however, as we found in Sniff 2, it can’t tell whether it has given me the right answer or not. That’s still a question for me to decide.

In the absence of accurate results reliably repeated over time, generative AI can never be entirely trustworthy. Tools such as ChatGPT still require a human being with suitable awareness and experience to validate their responses by providing expert oversight and judgment.
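
In software terms, that oversight looks like a human-in-the-loop checkpoint. Here is a minimal sketch, where ask_model is a hypothetical stand-in for whatever generative AI service is being used:

```python
# Human-in-the-loop sketch: the model drafts, a person decides.
def ask_model(question: str) -> str:
    # Hypothetical stand-in for a real generative AI call;
    # returns a canned (and possibly wrong) answer here.
    return "Luke Littler has won the darts competition."

def reviewed_answer(question: str) -> str:
    draft = ask_model(question)
    print(f"Q: {question}\nDraft: {draft}")
    # The model cannot tell whether its answer is right; a human still can.
    verdict = input("Publish this answer? [y/N] ")
    return draft if verdict.strip().lower() == "y" else "Held back for expert review."

print(reviewed_answer("Who won the World Darts Championship?"))
```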

Maybe widespread deployment of that type of AI will actually lead to an increase in demand for human expertise? Rather than devalue human acumen, perhaps the ongoing performance improvements in AI might paradoxically increase the market value of our own intelligence?

Sniff 4 is an opportunity to smell the breeze and consider how Jevons’ Paradox may come into play: is that the stench of the planet burning as our AI tools gobble up more fossil fuels? Or is it the acrid top note of a different set of expectations going up in smoke?

Sniff 5: So why should I care?

And so we come to Sniff 5, the final opportunity in this battery of tests to consider what each new claim about AI might mean for me — I mean, us.

As per Sniff 2, it’s what our AI will be used for that really counts. At one end of the debate is the prospect that AI will help to feed us: How AI Can Help End Global Hunger. At the opposite end of the spectrum, some argue that AI will inevitably end up feeding on us: Expert shows AI doesn’t want to kill us, it has to.

I care because I’m interested in the outcome, whatever happens. Whether I’m going to be served lunch or served as lunch, either way I’d still quite like to know what else is on the menu and why.

Of course, to be effective, AI requires data, and lots of it. There are plenty of ethical concerns about data: the way it’s gathered, the way it’s managed, and how it gets used. These concerns encourage us to think about the possibility of unintended negative consequences with the potential to cause actual harm to real people.

But what might that smell like?

Sniff 5 is when we can consider some of our unanswered questions, from the practical:

  • How will this thing actually make any money?

To the ethical:

  • Will this technology be used to remove or reinforce barriers to a more equitable society?

And the more philosophical:

  • At what point might we consider an AI to be sentient? And in that case, what rights should it have, and who gets to decide?

The practical and ethical issues are more immediate, so they definitely deserve closer inspection. The outlier philosophical questions are entertaining to consider from a theoretical point of view (I don’t believe we’re remotely close to having to decide on any of those right now).

In any case, we need to manage our expectations at all times. A helpful result from Sniff 5 might be the whiff of potential danger ahead: a signal to proceed, but with caution.

And finally…

I’m not saying these five tests are complete, definitive, robust, and unchanging. They are definitely messy and unfinished: just a partial list of headline issues I’m choosing to consider when evaluating each new claim about AI. These tests won’t all apply to the same extent in every instance. My understanding of each one is likely to evolve over time. But for now, this feels like a framework I can work with.

I’d be happy to hear whether they are useful for you as well.