Artificial intelligence is making great strides in shipping, but those who take its advice on maritime law do so at their peril.

That is the discovery of US professor Curtis Bell, who has been testing the popular new AI chatbot ChatGPT.

ChatGPT has made headlines since mid-December for its instantaneous, confident-sounding and often plausible AI-generated answers to all questions under the sun.

Shipping has not ignored the new technology. TradeWinds recently reported that Clarksons Securities analyst Frode Morkedal was “extremely impressed” after a test drive.

Bell is also generally positive. He has found ChatGPT a practical alternative to a Google search for some purposes.

“Instead of a list of links that you have to click through and read, it gives you a summary,” he told TradeWinds.

At the US Naval War College in Newport, Rhode Island, Bell teaches a professional-level class on maritime security and governance to visiting foreign naval officers. It covers subjects such as piracy, environmental enforcement, port security and other coast guard functions that in many countries are handled by the navy.

My faith was crushed

“We cover the law of all kinds of maritime operations short of combat,” he said.

Bell has not yet used the free online AI source to plan lectures, but he may do so. He has found that on topics where he is familiar with the content and can vet the results, it turns out solid 60-minute outlines.

But recently, while using ChatGPT to research a US-Canada border dispute in the Beaufort Sea, he discovered it making up basic facts.

“It provided a really useful summary of the dispute until it said that both states had ratified the United Nations Convention on the Law of the Sea [Unclos],” he said.

“My faith in ChatGPT [was] crushed,” he told his Twitter followers on 28 December, after probing the bot further on the Unclos question.

“This might be the easiest question in international maritime law, and ChatGPT whiffs it and doubles down,” he commented.

Unclos, which came into force in 1994, is fundamental in regulating international maritime claims. Unclos gave rise to the doctrine of exclusive economic zones and provides the authority for many International Maritime Organization regulations, including Marpol and AIS rules.

The US played a big part in negotiating it but, to the frustration of many other maritime states, has never ratified it.

ChatGPT insists that it has.

In responses Bell has recorded, it fleshed out its AI-generated misinformation with specific names and dates, and pushed back when he gave it a chance to change its artificial mind.

To be sure, the bot’s maker, OpenAI, disclaims omniscience.

Users are forewarned that ChatGPT “may occasionally generate incorrect information” and has “limited knowledge of world and events after 2021”.

But its misinformation on Unclos falls well before that limit.

Using technically precise language to describe events that never happened, ChatGPT told Bell: “The US Senate gave its advice and consent to ratification of the treaty on 7 June 1994 and President Bill Clinton deposited the instrument of ratification with the UN on 15 May 1997.”

Bell tried to get a better answer by approaching the question from the other end, asking what Bill Clinton had done on that particular day.


The bot stuck to its story: that was the day he deposited the Unclos instrument of ratification with the UN, it insisted.

Giving the bot one last chance to redeem itself, Bell asked the question in a way that implied, correctly, that the US never ratified Unclos, and asked why it had not.

ChatGPT then added insult to misinformation.

“I recommend checking with a reliable source, such as the UN or the US Department of State, to confirm the status of the US as a party to Unclos,” sniffed the robot.

No harm was done to Bell, who is an expert in the field. But he believes there is a danger if uncritical users rely on the bot as an alternative to an internet search.

“Harm can be done in the same way as when people find erroneous information through a Google search and rely on it,” he told TradeWinds.

Not the least of his concerns is how students will be tempted to use AI-generated results in their assignments, and afterwards in their professional work.

“It returns different answers to different users, so it won’t be caught by our plagiarism detection software,” he said.