Mayor Prepares World’s First Defamation Lawsuit Over False ChatGPT Claim
Aussie mayor Brian Hood of Hepburn Shire is preparing to sue OpenAI if it doesn’t correct ChatGPT’s false claims that he served time for bribery. The saga began when Hood discovered that ChatGPT had named him as guilty in a foreign bribery scandal involving the Reserve Bank of Australia in the early 2000s. The claim couldn’t be further from the heroic truth: Hood worked for the Reserve Bank’s subsidiary, Note Printing Australia, where he was the whistleblower who exposed the misconduct, and he was never charged with a crime.
Hood’s legal case could be a landmark moment in the AI and IT space, as it would apply defamation law to a new frontier: artificial intelligence and machine-generated publication. Hood’s lawyers claim that numerous people may have accessed the false information, and that the payout could exceed $200,000. Defamation damages payouts in Australia are typically capped at around $400,000.
This is just the first in a long line of open-ended legal issues surrounding ChatGPT and AI. Professor Uri Gal from the University of Sydney Business School explains how “ChatGPT is becoming a data privacy nightmare,” after Science Focus revealed that OpenAI fed ChatGPT more than 300 billion words scraped from the internet, including books, articles, websites and posts.
Hood’s lawyers have sent OpenAI a legal letter that gives them 28 days to fix their mistake or face a potential defamation lawsuit. If Hood sues, it would be the first time someone has sued the owner of ChatGPT for claims made by the AI language model. Since ChatGPT launched last year, it’s become a popular tool, and even Microsoft has integrated it into Bing.
Speaking to Reuters, James Naughton, a partner at Hood’s law firm Gordon Legal, said, “It would potentially be a landmark moment in the sense that it’s applying this defamation law to a new area of artificial intelligence and publication in the IT space,” with plenty of ramifications to follow should the plaintiff be successful in their defamation case.
“He’s an elected official; his reputation is central to his role,” Naughton said. Hood relied on a public record of shining a light on corporate misconduct, “so it makes a difference to him if people in his community are accessing this material”.
Naughton also points out that ChatGPT provides no footnotes, citations or sources, making it difficult for users to verify the accuracy of the AI’s answers. The case highlights the challenges posed when AI makes potentially damaging claims about individuals.
If Hood files a lawsuit, it will accuse ChatGPT of giving users a false sense of accuracy by failing to include footnotes, Naughton said.
“It’s very difficult for somebody to look behind that to say, ‘How does the algorithm come up with that answer?'” said Naughton. “It’s very opaque.”
Earlier this month, OpenAI CEO Sam Altman told ABC News that while ChatGPT could be “the greatest technology humanity has ever developed,” he was “particularly concerned that these models could be used for large-scale disinformation.”
“Still flawed, still limited.”
But what does ChatGPT have to say about all of this? Well, we asked. And here was its response.
Prompt: What is your response to the possible litigation that accuses ChatGPT of giving users a false sense of accuracy by failing to include footnotes?
Answer: “As an AI language model, I don’t have a personal response to the possible litigation against ChatGPT for giving users a false sense of accuracy by failing to include footnotes. However, I can provide some insight into the issue.
The lack of footnotes in ChatGPT’s responses makes it difficult for users to verify the accuracy of the information presented. This issue highlights one of the challenges posed by the use of AI in making potentially damaging claims about individuals. It’s important for AI language models to be transparent about their sources and how they arrive at their conclusions.
Incorporating footnotes or providing links to sources could help increase the accuracy and reliability of information presented by AI language models like ChatGPT. As AI continues to become more prevalent in our lives, it’s crucial to ensure that the information it presents is accurate and trustworthy.”
So there you have it. ChatGPT agrees: It’s important for AI language models to be transparent about their sources and how they arrive at their conclusions. But whether or not the courts agree with Gordon Legal and Aussie mayor Brian Hood of Hepburn Shire is another matter altogether.
If you’d like to read about more Aussie battlers in the court of law, check out the time Kanye West threatened legal action against a Melbourne-based burger shop for their College Dropout Burgers.