Does ChatGPT lie to please you?

With Apple releasing Apple Intelligence and integrating ChatGPT into iOS 18, how confident can we be in the accuracy of ChatGPT? Is it going to feed us a web of lies, or has it upped its game since we wrote this article?

Can you use ChatGPT in business? Lawyers, Victor Hugo and the Perils of Artificial Intelligence.

Unless you have been living on a different planet, you will have noticed countless articles discussing the impact of artificial intelligence on business. More specifically, there has been lots of discussion on the impact that ChatGPT is having on different business sectors. Is it all hype, and are there risks involved?

The short answer to ‘Can you use ChatGPT in business?’ is ‘Yes, but with careful moderation’. If that’s enough information for you, then please move on to the next article. If you want to understand the detail behind that answer, then read on…

Firstly, is it all hype? One of the problems with coverage of ChatGPT is that it is often referred to as ‘artificial intelligence’. That is really a misnomer, and an understanding of how it works helps to explain its limitations. ChatGPT is what is called a ‘large language model’: a neural network trained in a self-supervised fashion on enormous quantities of text, tuning billions of parameters so that it can mimic natural, human language. Essentially it treats language as building blocks, generating text one word at a time by choosing whichever word is statistically most likely to come next, based on the patterns in everything it has read. But the important word here is ‘mimic’: this doesn’t make it intelligent in the human sense; in many ways it is just a very clever parrot.
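To make that idea concrete, here is a minimal sketch of next-word prediction using a toy ‘bigram’ model, which simply counts which word most often follows which. This is a deliberate oversimplification: ChatGPT uses a transformer neural network, not raw word counts, and the tiny corpus and the `predict_next` helper below are invented purely for illustration. But the underlying principle of picking the statistically likeliest next word is the same.

```python
# Toy illustration of next-word prediction (a bigram model).
# Real large language models use neural networks with billions of
# parameters, not simple counts, but the principle is the same:
# pick the word most likely to come next.
from collections import Counter, defaultdict

# A made-up, miniature 'training corpus'.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# Generate text by repeatedly appending the most likely next word.
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # -> "the cat sat on the cat"
```

Notice the output: fluent-sounding, grammatically plausible, and nonsense. The model has no idea whether a cat can sit on a cat; fluency, not truth, is what it is built to produce. That, in miniature, is the problem this article is about.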

Albeit a parrot which has billions of bits of information readily to hand. Having been trained on a vast swathe of the internet, it can engage in what seems like a ‘normal’ conversation with you as you ask it a series of questions. Ask it for advice on holiday ideas in Italy based on your personal preferences and it will throw out some suggestions. Ask it to put together some Italian meal ideas, complete with recipes and a ready-made shopping list, and it will oblige. With this you are in safe territory.

But ask ChatGPT about something it doesn’t know and sometimes… it just makes up an answer, inventing completely spurious information. It’s almost as if it doesn’t want to disappoint you by admitting a gap in its knowledge. Ask it to back up its claims and it can invent non-existent books, articles and academic papers, citing references complete with chapter and page numbers. Unless you double-checked them you’d think they all existed. Some of them do. But others are a complete invention.

There is even a technical term for this phenomenon, whereby so-called artificial intelligence feeds you plausible-sounding falsehoods: hallucination.

An American lawyer, Steven Schwartz, was recently reprimanded by a US district court judge when it turned out that the brief he had submitted to the court cited six court decisions in support of his argument, none of which actually existed. He had used ChatGPT as a research shortcut and hadn’t thought to check that he had been fed a diet of hallucinations.

I attempted to use ChatGPT to help with some research I was conducting on Victor Hugo’s ‘Les Misérables’ (ChatGPT can handle multiple languages, including French). ‘Les Misérables’ is a vast work, made up of five volumes, each divided into books which are in turn divided into titled chapters: 48 books and 365 chapters in total. So using ChatGPT to interrogate the contents seemed like a clever shortcut.

When asked if a particular phrase appeared in the book, ChatGPT confidently responded with a chapter name and number, and a translation into English. On checking, I discovered that not only did the passage not exist, but the chapter name was a complete fabrication. When challenged, ChatGPT admitted its error, apologised (it is always polite), and cited a different chapter name and number – which also turned out to be false. Again and again, ChatGPT invented paragraph after paragraph of 19th-century French which it claimed was quoted from ‘Les Misérables’. All of it completely made up.

None of this, it has to be said, is particularly serious. Nobody was harmed by these Hugoesque hallucinations. And whilst the lawyer’s predicament may have had a detrimental effect on his career, it is difficult to be that sympathetic.

But what if a business used ChatGPT to write articles or help guides containing false information, and one of its clients suffered harm as a result? Would your Professional Indemnity Insurance cover you for such an eventuality? Clearly it is unwise to trust ChatGPT with anything that includes vital pieces of information without fact-checking before publication. And even when it is factually correct, ChatGPT’s output is often bland, flat prose which reflects neither a person’s nor an organisation’s personality. Its voice is the voice of mediocrity.

But the threat goes much further than the risk of AI producing page after page of middle-of-the-road platitudes to help you meet your business’s social media output target. In a world flooded with disinformation and conspiracy theories, what safeguards are in place to ensure that ChatGPT and its ilk are not used by the naive to justify abhorrent political opinions? Is it possible to hallucinate our way into a world where AI manipulates the facts as much as, say, politicians do?

This article, by the way, was written by a human.