Stop handwringing about new tech

There’s still a lot of talk about ChatGPT floating around Christian circles. How dangerous is it? (A Christian classic). Will it destroy the world? (A conspiracy classic). Will it replace jobs and, if so, whose? (An ongoing concern). There are plenty of questions in the air.

Now, I understand the questions. I get some of the concerns. But I am also convinced a bit of perspective is in order. There has never been a new technology – whether we’re talking about the Gutenberg printing press, the horseless carriage (or motor car) or the internet – where Christians have not got their knickers in a twist and convinced themselves it is the most pressing issue since Satan nodded in the direction of a particular tree and went, ‘fancy a bit of that?’

The thing is, it is a tool. As with any tool, it can be used helpfully and it can be used badly. A hammer can be wielded in such a way that it helps build amazing monuments or it can be waved around in such a way that it kills a man. A car can be used to link people together or it can be used as a means of enabling people to escape from crime scenes and cover their wrongdoing. Just about every other tool, piece of technology, or thing ever invented is the same. Tools can be used well and utilised as a means of flourishing or they can be wielded terribly and cause misery and destruction. There is no reason to consider ChatGPT any differently. Indeed, there is no reason to view AI in general any differently.

As with any tool or tech, the question revolves around the best way to use it. It’s no good swinging a hammer when what is needed is a plunger. It’s no good using a screwdriver when what you need is a saw. In the same way, new tech might be helpful and doesn’t have to be viewed as essentially problematic, so long as we recognise what it can do well and what it can’t.

In the case of ChatGPT, everyone seems to be focusing on the ‘what if’ questions. Few seem to focus on what it can do and how it might be applied helpfully. Many are asking, what if pastors start using it to write their sermons? To which the answer, it seems to me, is that most congregations, even barely well-taught ones, would know. If your sermons are ‘just as good’ using AI alone, I am minded to think that says more about the quality of your sermons than about the credibility of ChatGPT. Whatever it may be able to do on the exegesis and explanation front (and I’m not sure it can do as well as some people seem to think), it can’t apply a sermon meaningfully to the people in your congregation, whom it doesn’t know and on whom it has little, if any, data.

But what I think ChatGPT can do well is take a sermon that has already been written by a real-life pastor and distil it down into a helpful summary. It is a tool that can summarise and condense masses of information. Whilst we can do that ourselves, it takes a lot of the leg work out of chasing after reading and references or thinking through how we would summarise something succinctly. That, at least in our context, is particularly helpful when it comes to running home groups that dig into Sunday’s sermon a little more and require a short summary to remind us what we said. It is a time saver.
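For anyone minded to build that into a weekly routine, here is a minimal sketch of how the summarising step might be automated using the OpenAI Python library. It assumes you have the library installed and an API key set in the OPENAI_API_KEY environment variable; the model name and prompt below are my own placeholders, and simply pasting the sermon into the ordinary ChatGPT window works just as well.

# A minimal sketch, assuming the OpenAI Python library (v1.x) and an API key
# set in the OPENAI_API_KEY environment variable. The model name and the
# prompt wording are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def summarise_sermon(sermon_text: str) -> str:
    # Ask the model for a short summary suitable for a home-group handout.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whatever is available to you
        messages=[
            {
                "role": "system",
                "content": "Summarise the following sermon in three or four short "
                           "paragraphs for a midweek home group discussion.",
            },
            {"role": "user", "content": sermon_text},
        ],
    )
    return response.choices[0].message.content

# Example usage:
# with open("sunday_sermon.txt") as f:
#     print(summarise_sermon(f.read()))

Either way, the summary still needs checking by the person who actually preached the sermon, for the reasons that follow.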

What is more, the summary skills it has are valuable on at least two fronts, but only when overseen by a person who knows their onions. First, it is helpful when the information an individual feeds it is good. It cannot be trusted to select good sources. If you ask it a general question, it will distil all the data it can find but it will have no judgement on what is good or bad. It requires someone to assess that and then input useful data. Second, given that it is prone to giving answers that sound cogent and credible – and to making up such credible-sounding answers when it has no clue – it is only valuable when overseen by someone who can pick their way through that tendency. You need someone to oversee the data it collates and you need someone, when it outputs its answer to your question, to assess whether that answer is credible.

For instance, if you ask it about John Bunyan’s view of close communion, it answers:

Regarding the practice of close communion, Bunyan advocated for the participation in the Lord’s Supper to be restricted to baptized believers who were members of the local church. He argued that the Lord’s Supper was a sacred ordinance that should be observed by those who had publicly identified with Christ through baptism and had been admitted into the visible church. He believed that close communion was a means of preserving the purity and integrity of the sacrament.

Similarly, if you ask whether John Bunyan considered believer’s baptism necessary for church membership, it says:

Yes, John Bunyan considered believer’s baptism as necessary for membership in the local church. He held to the belief that baptism should follow a personal profession of faith in Jesus Christ. According to Bunyan’s view, only those who had been baptized upon their conscious decision to follow Christ were eligible for membership in the church.

If you know anything about Bunyan’s position on the matter, and have any familiarity with the discussions he had with William Kiffin on these issues, this is made-up nonsense. The exact opposite of what he believed. Bunyan didn’t take those views. He thought them very wrong. He wrote and printed a work called Differences in Judgement About Water-Baptism, No Bar to Communion. But if you didn’t know and were asking the question generally, hoping to rely on the answer, ChatGPT leads you up the garden path – very convincingly, and self-assuredly, but nevertheless wrongly.

But, to come back to the matter at hand, what difference is there between that and people accessing books, articles and other such things that also speak half-truths or, sometimes, just out and out nonsense? Since when is it new for people to have to parse information and determine its credibility? Since when did people not need to engage with those who had better knowledge to see if what they have read and understood to be the case is actually the case at all? It seems to me, for all the noise, it remains the case that there really is nothing new under the sun.

At the same time, just as Christians have a tendency to do with every other new technology or advance in tools, we go through cycles. The cycle is almost always the same. We are adamant every new technology is dangerous, before eventually coming to embrace it – partly because our children start using it and find it useful and no more dangerous than the tech their grandparents told their parents was dangerous – and we soon move on to handwringing about the next advance in technology instead. Christians have been doing this for centuries.

I am sure, as with any technology, AI has its dangers. Clearly, as I showed briefly above, ChatGPT is not without its evident flaws. But we always seem to focus on the apparent dangers – which usually sounds remarkably close to ‘don’t touch, don’t taste, don’t handle’ with its appearance of spirituality – and very rarely consider the possibilities and uses. I am not suggesting we should be blind or ignorant of possible dangers. I just think we are so often overly alive to them. We over-emphasise them. The dangers we spot in these things very quickly turn into world-changing, major threats that must be opposed. It is almost as though we have such a low view of the Holy Spirit that we allow these worries and anxieties to dominate, as if this latest tech and whatever problems it may bring were somehow beyond his power to keep us or to work for our good in them. If nothing else, it is just exhausting. Perhaps we just need to chill out a bit.

I will be honest, I am rarely convinced when Christians highlight new tech and its dangers that it will prove to be anywhere near as serious as many claim. If we really heeded every Christian klaxon on these things, the Amish would be looking on us with pity for our old-fashioned ways. You really do get the impression some would not be content until they had told us five reasons why this new-fangled wheel technology is really quite dangerous! In the end, AI is a tool. As with any tool, if it is put to use in the right way, it can be immensely helpful. If it is not used rightly, it can be particularly unhelpful. But we don’t all get handwringy about carving knives and screwdrivers – writing article after article about the apparent dangers – we just know they are useful, more useful than not, and use them as intended. There is no reason to worry unduly about the latest tool on offer, which might similarly be used well or badly and may well require careful handling. In the end, it may well have its uses, it pays to be aware of the drawbacks, and we can just get on with our lives safe in the knowledge that Christ is with us by his Spirit and there really is nothing new under the sun.
