Meta's and Google's use of user comments and reviews in generative AI responses - such as answering queries about restaurants or summarising sentiment - could introduce new defamation risks, experts have warned.
In Australia, when a user makes an allegedly defamatory post or review on Google or Facebook, it is usually the user who faces legal action for defamation. But a landmark 2021 high court ruling in Dylan Voller's case against news outlets - over comments on their social media pages relating to the young Indigenous man's mistreatment in Don Dale youth detention centre - established that the operator of a page hosting a defamatory comment, such as a news outlet's Facebook page, can also be held liable.
The tech companies are occasionally taken to court in Australia. Google was forced to pay former deputy NSW premier John Barilaro more than $700,000 in 2022 over hosting a defamatory video, and the company was ordered to pay $40,000 in 2020 over search results linking to a news article about a Melbourne lawyer.
Last week, Google began rolling out changes to Maps in the United States, with its new AI, Gemini, allowing people to ask Maps for places to visit or activities to do, and summarising the user reviews for restaurants or locations.
Google also began rolling out AI overviews to Australian users last week, which provide AI-generated summaries of search results.
Meta has also recently begun providing AI-generated summaries of comments on Facebook posts, such as those published by news outlets.
Michael Douglas, a defamation expert and consultant at Bennett Law, said he expects to see some cases reach court as AI is rolled out into these platforms.
"If Meta sucks up comments and spits them out, and if what it spits out is defamatory, it is a publisher and potentially liable for defamation," he said.
"No doubt such a company would rely on various defences. It may argue 'innocent dissemination' under the defamation acts, but I am not sure that the argument would get very far - it ought to have reasonably known that it would be repeating defamatory content."
He said the companies may also rely on new "digital intermediaries" provisions in some states' defamation laws, but AI may not fall within the scope of the new defences.
Prof David Rolph, a professor of law at the University of Sydney, said an AI repeating allegedly defamatory comments could be a problem for the tech companies, but the introduction of the serious harm requirement in recent defamation reforms may reduce the risk. He noted, however, that those reforms were introduced before large language model AI became widely available.