Theos


Virtual Defrocking: AI can only take your job if you let it


In light of the launch (and subsequent defrocking) of Catholic Answers’ AI chatbot ‘Fr Justin’, Nathan Mladin explores how we can effectively harness AI while preserving human relationships. 26/04/2024

We’re at the dawn of a new technological revolution. AI is already proving to be a highly disruptive technology whose impact will be felt across all sectors of society and all aspects of life for years to come.

But the conversation, particularly around the automation of work, easily gets mired in a popular misconception. Bluntly, AI is not inherently poised to usurp any roles. It can do no such thing. The notion of ‘technological determinism’ – the belief that technology’s development follows an inevitable path that humans cannot alter – is not only misleading but ultimately dehumanising. It should be vigorously resisted. Coupled with the personification of AI, technological determinism not only masks human interests – generally moneyed ones – behind lifeless technology but also amounts to a failure, or even an abdication, of human responsibility. Because to be human is fundamentally to be responsible for the dynamic web of relationships in which one is born, “lives, moves, and has one’s being.”

Today, and indeed always, it is humans who have the power and responsibility to decide when, where, and how to integrate (or not) technology into their lives, relationships, and societies.

AI’s influence is evident, but human control remains paramount, as demonstrated by this week’s public response to the ‘Father Justin’ AI developed by Catholic Answers. Designed to “provide users with faithful and educational answers to questions about Catholicism” through a ‘friendly’ and knowledgeable avatar, the app quickly faced backlash for appearing to encroach on the authority of a real priest, and on the nuanced role and relationship a priest embodies. In response, the developers modified the product to represent a member of the laity instead.

Responsibly integrated, AI systems can add significant value. For example, AI applications in healthcare are helping manage patient care more effectively, allowing doctors to spend more time on complex cases that require a human touch. AI also brings value in treatments, using complex algorithms to save lives by identifying health risks and treatment opportunities that might elude human practitioners. Similarly, in education, AI can provide personalised learning paths for students, making education more accessible and tailored to individual needs and abilities.

But as more and more highly capable AI systems and ‘agents’ are developed, the challenge is to carefully consider which aspects of thought, of creativity, and of interpersonal relationships can be ‘outsourced’ to technology and which should not. This is difficult, as unintended consequences will not be immediately apparent, and what may be lost in the medium and long term will not be as easy to ascertain as what is gained in the moment.

When it comes to human–AI interactions, the design and use of AI should be approached with a commitment to truthfulness and transparency. One way to do this is to ensure AI systems show their ‘otherness’, with clear indicators that they are not human – for example, by using more formal language or avoiding colloquialisms and human linguistic idiosyncrasies. With ‘friction’ in human–AI interactions, we have a better chance of resisting manipulation and self–deception, and safeguarding the authenticity of human relationships.

But we cannot be naïve. This will be an uphill battle. Tapping into fundamental features of our personhood, not least our innate desire for human encounter, realistic simulations of persons and seamless AI–human interactions make ‘commercial sense’ – i.e., they sell better – even as they may be corrosive to basic human goods and relationships.

‘Clunkier’ and less ‘persuasive’ AI agents which tell the truth about what they are will demand sacrifice. But if we are to have a future in which AI supports human capabilities and relationships rather than undermines them, we will need investors, builders, and executives who are willing to sacrificially put people above profit and human relationships above maximum return on investment. May their tribe increase.

 



Image: Tara Winstead on Pexels.

Nathan Mladin


Nathan joined Theos in 2016. He holds a PhD in Systematic Theology from Queen’s University Belfast and is the author of several publications, including the Theos reports Data and Dignity: Why Privacy Matters in the Digital Age, Religious London: Faith in a Global City (with Paul Bickley), and ‘Forgive Us Our Debts’: lending and borrowing as if relationships matter (with Barbara Ridpath).


