
Learning language is an inherently ethical process – and that includes AI

Madeleine Pennington on the ethical dimensions of AI as language learning, and the need for responsible regulation. 31/05/2023

“The AI boom is here”, declared CNN, as Nvidia (whose chips are used to power generative AI systems) released their latest, extremely positive quarterly sales outlook last week – and promptly saw their share value increase by nearly 30%.

Developments in AI are gathering pace, and many of them are good news: again in the last week, we heard that AI tools can now predict if cancer will spread, cure paralysis, and discover new superbug-killing antibiotics. Yet this rapid revolution has a dark underbelly that is already having destabilising consequences across society – whether it is children being advised how to sleep with much older (adult) partners, the increasing sophistication of AI-generated images and targeted algorithms threatening to undermine trust in politics entirely, or potentially unacknowledged implications for military strategy. Last week also saw Downing Street acknowledge the existential threat posed by AI for the first time, and this week experts including the chief executive of OpenAI itself described AI as an “extinction risk”. An AI boom indeed.

Perhaps nothing captures both the promise and danger of AI so much as Elon Musk’s “Neuralink” – a brain implant being promoted as a clinical aid for those suffering from brain injury in the first instance, but an attempt to achieve “symbiosis with artificial intelligence” in the longer term. Musk has publicly advocated an “if you can’t beat them, join them” approach to AI, and hopes Neuralink will allow humans to continue to control developments in machine learning from the inside, as it were. Again, human trials of Neuralink were approved last week. The pace of change is dizzying. But while Neuralink is promoted as a (literally) progressive attempt to empower the vulnerable, its ultimate ethical implication is a rejection of human limits – ironically, driven in part by a lack of faith in our ability to regulate the technology we are rushing to create without pause. This is not just a technological breakthrough but a reimagining of what it means to be human, and it is born out of a forced hand. In the words of the American novelist and farmer Wendell Berry, “the next great division of the world will be between people who wish to live as creatures and people who wish to live as machines”.

These are just some of the impacts of AI already coming into view, but many are still unpredictable. As Tristan Harris and Aza Raskin observe in their recent presentation, The AI Dilemma, the rate of change is now double exponential, because all fields in the development of machine learning have been synthesised into the same underlying goal: the increased sophistication with which machines can learn, mimic and decode language. Text, images, sound, and even DNA can all be translated into forms of language, making the applications of this increased sophistication potentially endless. Moreover, the release of large language models into the public domain is effectively a mass training ground for these models to be refined and improved.

Discerning the signal under the noise is perhaps impossible in such a new and wholly transformative field, but a focus on language is at least instructive. As the former Archbishop of Canterbury Rowan Williams once noted in a series of reflections on the cultural changes of the late 20th century, human development is also fundamentally based on language learning. But humane societies have generally protected a “latent” phase whereby the complexities and implications of this language are learned without risk – namely, childhood. Williams writes:

“Part of [the process of language learning] is play; because to learn language is to discover, by trial and error, what I can seriously be committed to when I open my mouth, what I’m ready to answer for. This is something I cannot begin to do with intelligence or confidence unless I am allowed to make utterances that I don’t have to answer for. We do not treat children as adult speakers whom we expect to take straightforward responsibility for what they say.”

Even in 2000, Williams diagnosed “impatience” in a society that was no longer protecting this phase, but instead expected children to grow up and behave like responsible adults as quickly as possible. To draw an analogy, then, how should we characterise a civilisation that not only tests and teaches its machines in the real world (with very real consequences for politics, social trust, cyber security, and even warfare) but creates new counter-technologies precisely to match this rapid language learning, because it has already relinquished the possibility of adequate regulation?

Nobody could deny the exciting possibilities of machine learning at its best. But if the world is the new playground, how impatient are we being now?

More than anything, Williams’ observation underlines the inherently ethical and cultural nature of language learning – whether human or machine. As such, it cannot simply be delegated to technologists (no matter how responsible or otherwise they are as individuals) but is the moral responsibility of the whole society. Far from releasing new and untested technologies into the public domain for fun, then, real “parenting” here means a far wider public conversation about the uses of AI we think permissible, including time for robust regulation to be implemented before the speed of new developments makes such consultation obsolete.

Many are already arguing that it is too late – that regulation of such a complex, fast-moving and lucrative field is impossible – or at least that, given the international allure of machine learning, regulation in the West would only enable geopolitical rivals to take the strategic lead. And it is true that while there are viable routes to regulation, including the EU’s proposed AI Act, it will be years before many of them have impact. Nonetheless, if we treat AI as the profound revolution it is, recognising its massive implications at every level of human society, no effort is too small. The scale of the task only strengthens the argument for urgent and robust action; the speed of change means it really is better late than never. The challenge is not unlike that of nuclear non-proliferation, where the failure to regulate early has only made it harder down the line – and as the most powerful nuclear weapons today are over 3,000 times more powerful than the bomb dropped on Hiroshima, we all live under the shadow of that failure.

I am reminded of another civilisation that ignored its limits, seeking to harness the power of a common language for mastery over nature without restraint. To Babel, God said:

“If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them. The Lord confused the language of the whole world, and from there the Lord scattered them over the face of the whole earth.”

Human creativity – our ability to imagine, build, and consolidate beyond what is immediately obvious – is one of our most beautiful qualities, but it is also dangerous without moderation. The markets may be excited by the promise of machine learning, but we will all pay the cost if calls for patience are unheeded.



Photo by Tara Winstead on Pexels.

Madeleine Pennington


Madeleine is Head of Research at Theos. She holds a doctorate in theology from the University of Oxford, and previously worked as a research scholar at a retreat and education centre in Philadelphia. She is the author of ‘The Christian Quaker: George Keith and the Keithian Controversy’ (Brill, 2019), ‘Quakers, Christ and the Enlightenment’ (OUP, 2021), ‘The Church and Social Cohesion: Connecting Communities and Serving People’ (Theos, 2020), and ‘Cohesive Societies: Faith and Belief’ (British Academy, 2020). Outside of Theos, she sits on the Quaker Committee for Christian and Interfaith Relations.
