Neural Nodes is an occasional feature here on the blog: some space for me to share links of possible interest, thoughts and suggestions for further blog posts, and general noodling around.

Ideas I want to get out of my head and into yours.

The AI year in review

Richard Waters, writing in the weekend edition of the Financial Times, goes to town on the first year of “the AI revolution”, asking “has anything changed?” According to Waters we are now well and truly over the peak hype hump, with 2024 looming as a year of reckoning for the technology: in other words, “put up or shut up” seems to be the likely direction of travel.

EA vs e/acc (cont.)

Elsewhere in the FT, Martin Sandbu, the paper’s European Economics correspondent (and former lecturer in Philosophy at Harvard, among other institutions), comes out swinging against the Effective Altruists (aka “EAs”; slogan: Do good, better). I found his piece on the moral metric a worthwhile if somewhat depressing read. I was reassured to an extent by his final take that EA as a guiding philosophy for the technocrats probably doesn’t matter much; but I could have done with a similarly critical look at the counterpoints of the Effective Accelerationists (aka “e/accs”, the opposite camp to the EAs in most people’s eyes).

Above all, I felt the whole piece fell prey to the criticism that Cory Doctorow and others have advanced, and which we discussed earlier: that the EAs and e/accs build their worldviews on the same false assumption about the actual power and potential of AI. The moral panic, in other words. It’d be helpful - at least for me - if informed and experienced commentators such as Sandbu extended their critiques to expose this false assumption more often.

AI regulation update

Also in the Weekend FT, Margrethe Vestager, the EU’s competition and digital chief, has come out in defence of the proposed AI Act, in response to criticism from French President Macron that it will inhibit innovation (particularly for French-based AI endeavours, such as Mistral). The Act envisages a two-tier approach, with stricter disclosure and transparency requirements for applications targeting “sensitive” sectors such as healthcare. The draft Act also provides the first clear set of rules for governing the development of foundation models such as the one behind ChatGPT.

Vestager is reported as saying the Act:

“would not harm innovation and research, but actually enhance it ... [The AI Act] creates predictability and legal certainty in the market when things are put to use.”

It’s not clear to me that the EU’s draft Act reflects exactly the same concerns as the EAs’, but it seems that Macron may be very much on the side of the e/accs. Whatever divide there is will need to be crossed if the Act is ever to come into law, but at the moment it is hard to see any signs of bridge-building going on. France, Germany and Italy are said to be seeking amendments, or to block the Act altogether.

AI - the year ahead

Vestager went on to tell the FT:

"Regulation as such is not the only answer .... It creates trust in the market. Then you have the investment and of course, you want people to start using [AI technology] because only in that you can really shape it.”

The emphasis on the importance of getting this stuff into the hands of ordinary users reminded me of the FT Editorial Board’s own set of predictions for how AI will develop this year, published a few days ago. Their forecast is for three trends in particular to shape the next act for generative AI:

  • Generative AI models becoming usable on smartphones (currently in development by Apple, and discussed back in July last year by Dan Taylor-Watt);
  • Companies using open-source AI models “to deploy generative AI safely on their own proprietary data sets for clear business use cases”; and
  • “The launch of more powerful multi-modal models, blending text, image, audio and video [which] will also extend the creative possibilities.”

Although the Board say that this will amount to “focused adoption”, in contrast to last year’s “fun experimentation”, their opinion piece ends by quoting the science fiction writer William Gibson: “the street finds its own uses for things”.

Which to me (keeping Vestager’s comment in mind) still sounds rather like the strategic approach to product development otherwise known as:

“Let’s just throw stuff against the wall and see what sticks.”

AI and the bad feels

From Bloomberg comes this piece by Opinion columnist Paul J. Davies, which picks up on the theme of moral panic:

Apocalypse Now? Only In Our Fevered Dreams
Doomsday fear-mongering reached a high pitch this year. Let’s try to stay focused on more immediate problems in 2024.

Davies takes aim at both the Effective Altruists and the accelerationists, setting both points of view in a broader historical context of the regularly recurring doomsday vibes that tend to accompany revolutionary movements aimed at improving our everyday human lot.

In his view:

These are dangerous fevers of the mind that careless or callous opportunists can whip up into self-aggrandizing cults or extremist left- or right-wing politics. Such thinking also leads us to waste time and resources on fanciful concerns. The biggest issue I have with effective altruism, or with fixating on the existential threat of a Skynet-like AI, is that these become excuses not to tackle the difficulties that are right in front of us: misinformation, bias and privacy concerns; expanding energy usage and CO2 production; hollowed-out communities lacking infrastructure and jobs.
In 2024, we should encourage each other to focus on these immediate and solvable problems. There’s no time to waste succumbing to manias.

I kinda do and kinda don’t agree. But I like the effort to set the argument in some kind of wider context. Nothing new under the sun, and all that.

Given the New Year shutdown, there were no fresh headlines on what is likely to be one of the most critical AI-related stories of early 2024: the New York Times’s case against OpenAI/Microsoft for copyright infringement on a truly massive scale.

The BBC has a short video explainer of sorts with Professor Gina Neff from Cambridge University; but for the real action and to get a feel for how dense (and at times vicious) the arguments are around this case, check it out on X.

I have no idea about the (copy-)rights and wrongs being argued over here. The deal that the publisher Axel Springer has already done with OpenAI seems significant: a two-tier income stream covering both historical (archive) content and new output. Given my relatively recent experience in business development for the public service side of the BBC’s digital content and services, I can’t help wondering if there is a possible business model here (pending any outcomes from the NYT case). I’d love to know who, if anyone, is working on the strategy required to position the BBC as a provider of quality public service content to the Large Language Models (LLMs), and beyond.

Generative AI, meet GenAI

Is “GenAI” the next generational cohort? After GenX and GenY? You have to admit, as puns go it’s not bad. And you heard it here first.

If AI is as powerful as some people say - a truly profound technology “as important or more than fire and electricity,” according to Alphabet’s Sundar Pichai (seriously? - ed.) - then isn’t it likely that the first generation to grow up under its auspices will reflect aspects of this in their beliefs and behaviours?

A hunch I have is that the widespread application of AI could lead to a generation of tomorrow people with a much more nuanced view of what is real and true, and what is not; and with different ideas about what “reality” and “the truth” actually mean to them in practice.

What the further implications of that might be is anybody’s guess.