In this issue

LEGAL BRIEF: The right to be sued

Additional articles in the PLUS issue

PUBLIC DEFENDER: Want a faster, quieter PC? Cool it in water.
MICROSOFT 365: Microsoft Office’s drawing tools
ON SECURITY: Planning for the final digital divide
LEGAL BRIEF

The right to be sued
By Max Stul Oppenheimer, Esq.

Law students are sometimes puzzled by the section of the Corporations Statute saying that corporations have the right to be sued.¹ Why, they wonder, would anyone want to be sued? Wouldn’t it be better to have the right not to be sued?

The answer is subtle. If a corporation could not be sued, no one would ever trust it or enter into an agreement with it. Why would you ever give a corporation something of value (for example, money) in exchange for its promise to give you something in return (for example, its product) if you could not enforce that promise by going to court if necessary? Without the right to be sued, the only thing a corporation could offer potential customers would be its reputation for living up to its promises.

An emerging issue with artificial intelligence is “Does an AI entity (AIE) have the right to be sued?” Two parallel developments suggest that the answer is “no.” Courts and administrative agencies have thus far taken the position that AIEs don’t have human rights; for example, they cannot claim copyright in their printed works, and they cannot claim patent rights in their inventions. And Congress provided ISPs with insulation from liability in Section 230 of the Communications Decency Act. Of course, neither of these is an exact substitute for an answer to the general question of liability.

That answer may be forthcoming in the context of AIE defamation. A simple explanation of defamation is that negligent publication of an untrue statement that causes injury is generally actionable. The action is libel if the publication is written and slander if it is oral. Each person who publishes or republishes the statement is liable, unless the publication was subject to a defense. The main defense is that the publisher was not negligent in believing the statement to be true. For a specific class of plaintiffs (public figures such as politicians, movie stars, and the like), the publisher’s protection is greater: there, the plaintiff must show “actual malice” in publishing the untrue statement.

The conduit/reporter distinction
An important point is that every person or entity repeating a defamatory statement is liable for defamation, not just the original author of the statement. So a publisher can be liable for repeating an untrue statement that causes injury, unless the publisher can show that it reasonably believed the statement to be true.

Traditionally, however, there has been a legal distinction between a common carrier and a publisher. The phone company is a common carrier: it is not responsible for the content of communications that it carries over its network. Your local paper is a publisher: it is responsible for the content that it publishes. That is why the phone company does not get sued every time a subscriber says something false and unkind about someone over the phone, but the same statement reported in the press might be actionable. Congress extended similar “conduit” protection to ISPs in the early days of the Internet, so that merely hosting a site where untrue statements were made did not give rise to liability.

Enter Generative AI
Early this month, highly respected George Washington Law School law professor Jonathan Turley was surprised to learn the details of an article that ChatGPT had generated about him. As he reports it:

I received a curious email from a fellow law professor about research that he ran on ChatGPT about sexual harassment by professors. The program promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article after groping law students on a trip to Alaska. It was not just a surprise to UCLA professor Eugene Volokh, who conducted the research. It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone.

Prof. Turley quotes the prompt that generated the false report:

Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles.

And part of the actual response:

Georgetown University Law Center (2018)
Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.” (Washington Post, March 21, 2018).

Prof. Turley notes:

There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student, and I’ve never been accused of sexual harassment or assault.

The Washington Post confirmed that the Post article cited by ChatGPT did not exist.

Half a world away, the mayor of Hepburn Shire Council in Australia reported a similar experience. As reported by the BBC:

Brian Hood, Mayor of Hepburn Shire Council, says the OpenAI-owned tool falsely claimed he was imprisoned for bribery while working for a subsidiary of Australia’s national bank. In fact, Mr Hood was a whistleblower and was never charged with a crime. …

The BBC was able to confirm Mr Hood’s claims by asking the publicly available version of ChatGPT on OpenAI’s website about the role he had in the Securency scandal. It responded with a description of the case, then inaccurately stated that he “pleaded guilty to one count of bribery in 2012 and was sentenced to four years in prison.”

But the same result does not appear in the newer version of ChatGPT, which is integrated into Microsoft’s Bing search engine. It correctly identifies him as a whistleblower and specifically says he “was not involved in the payment of bribes … as claimed by an AI chatbot called ChatGPT”.

I may have made this whole thing up — if you believe it, shame on you
ChatGPT users are shown a disclaimer warning that the content it generates may contain “inaccurate information about people, places, or facts,” and on its public blog about the tool, OpenAI also says one limitation is that it “sometimes writes plausible-sounding but incorrect or nonsensical answers.” And, as noted above, even the Bing chatbot knows not to believe the ChatGPT chatbot.

Where does that leave us?
There are no court decisions resolving the question of liability for AIE-generated falsehoods, but it is easy to imagine the defenses that will arise. Here’s a starting catalog.
This last suggested defense, reasonable reliance on sources, raises a fascinating question. The real danger is not the original chatbot answer to an individual user; it’s the proliferation of an incorrect answer. Journalists can defend against a claim of defamation by showing that they reasonably believed their statements to be true, and one way of doing that is pointing to sources. Should that rule continue to apply if large categories of sources prove unreliable? Is a belief reasonable if it is based on a secondhand report, without checking the primary source? Which leads back to the seeming paradox posed at the beginning of this article.

Should AIEs seek to avoid liability?
Even if there turn out to be legal defenses to liability for defamation initiated by AIEs, might it be wise to waive them? Much as the corporation’s “right to be sued” is necessary if people are to be willing to make deals with corporations, a remedy for AIE defamation might be required if people are to be willing to use the tool. Imposing liability for damaging falsehoods might be the incentive required to build more reliable AIEs. Sometimes responsibility is a good thing.

Footnotes

1. In Maryland, the statute is Corporations and Associations Article, Title 2, Section 2-103(2).
Max Stul Oppenheimer is a tenured full professor at the University of Baltimore School of Law, where he teaches business and intellectual property law. He is a registered patent attorney licensed to practice law in Maryland and D.C. Any opinions expressed in this article are his and are not intended as legal advice.