By Mbilike M. Mwafulirwa
Imagine yourself – an upstanding and accomplished attorney or judge – surfing the internet in your down time. While doing so, you come across a video clip of a familiar person spewing deplorable things about your peers and admitting to serious crimes. To your horror, the person in the video looks exactly like you, sounds just like you and acts like you. The only things you're certain of are that it's not you and that you don't have an identical twin. But for all you know, everyone who sees the clip will believe it's you. What then?
Welcome to the age of the deepfake. Like fake news, the deepfake is a recent – and more devilish – offshoot of the technological advances of our time. Simply put, deepfakes are highly sophisticated, convincing and malicious fake audio and video that make a person appear to say or do something they did not.[1] When done right, the most sophisticated deepfakes can be nearly impossible to detect.[2] That should concern us all. Our legal system relies heavily on audiovisual evidence. Courts, for example, routinely resolve summary judgment motions based on video or audio evidence. Juries, courts recognize, find it difficult to overcome what they see or hear.[3] Thus, the key question becomes: What do we do when our legal system can no longer trust what we see or hear?
That important concern reverberates into the constitutional realm. To begin, an adverse judgment based on false evidence raises serious due process concerns. Likewise, the First Amendment comes into play in multiple ways. First, there are U.S. Supreme Court cases on content restrictions to deal with when any governmental solution for deepfakes is under consideration. Second, drawing a line between protected fake speech – for example parodies and satire – as opposed to deepfakes, requires careful analysis. Third, that analysis inevitably brings into sharp focus First Amendment and Due Process rights to petition for redress in the courts for malicious falsehoods. Finally, because deepfakes also affect national security and democratic processes, a word or two on the constitutional considerations on those subjects is warranted.
THE PROBLEM DEFINED – DEEPFAKES
Superimposing facts, images, video and speech is not new in America. Traditionally, satire, parody and caricature edit, insert or superimpose deliberate exaggerations into real situations for humorous or critical effect.[4] Intellectual property law has likewise long recognized that, to foster innovation, a fair degree of borrowing, improving and superimposing is inevitable.[5] In recent times, Hollywood has used superimposing technology in movies and television shows for entertainment. The classic movie Forrest Gump, for example, superimposed Tom Hanks’ character into several important historical events,[6] and in this age of social media, most of us are familiar with Photoshop, software that allows users to alter pictures artificially to improve them.[7]
Deepfakes are different. They are highly believable fake video and audio created with advanced software for malicious ends.[8] The earliest deepfakes superimposed celebrities’ faces onto pornographic actors.[9] Soon, though, deepfakes charted new terrain: first, a deepfake video of President Obama making a public service announcement about civility in public discourse; then, a fake video of Mark Zuckerberg confessing that Facebook misuses user data.[10]
Worryingly, both the President Obama and Mark Zuckerberg videos were, to the naked eye, undetectable as fakes. As noted, when done right, sophisticated deepfakes are nearly impossible to detect even for the most adept among us. To compound matters, even with the aid of advanced forensic technology, detecting well-made deepfakes is a highly challenging endeavor.[11],[12] What’s more, unlike the hand-spliced videos and edited pictures, films or audio of old, deepfakes are largely generated by artificial intelligence.[13] That helps explain the generally high-quality end product.
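For readers curious about the underlying technology, the sketch below illustrates – in deliberately simplified form – the shared-encoder, two-decoder autoencoder design that public technical accounts commonly associate with deepfake software. It is a minimal, hypothetical illustration (toy dimensions, random stand-in data, illustrative names), not a description of any particular tool:

```python
# Minimal, hypothetical sketch of the shared-encoder/two-decoder design
# often described in technical accounts of deepfake generation.
# Dimensions are toy-sized and the "faces" are random tensors standing in
# for real photographs; an actual system trains far larger networks.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 color image into a compact latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(64 * 64 * 3, 512),
                                 nn.ReLU())

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 color image from the latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(512, 64 * 64 * 3),
                                 nn.Sigmoid())

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()    # one shared encoder learns generic facial structure
decoder_a = Decoder()  # decoder A learns to render person A's appearance
decoder_b = Decoder()  # decoder B learns to render person B's appearance

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training batches: 8 random "photos" of each person.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # real training runs for many thousands of steps
    optimizer.zero_grad()
    # Each decoder learns to reconstruct its own person's faces.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The "swap": encode person A's face, then decode it as person B.
fake_b = decoder_b(encoder(faces_a))
```

The design choice that makes the swap possible is the single shared encoder: because it must represent both faces in one common latent space, either decoder can render either person – which also helps explain why the output looks seamless rather than spliced.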
These days, celebrities and politicians are no longer the only targets of deepfakes. Private figures and businesses are now in the deepfake mix. Consider first the plight of private individuals. In June 2019, an app called DeepNude made headlines because its deepfake technology could turn any photo of a woman into a real-looking nude picture.[14] DeepNude’s process was streamlined – just upload a photograph of any woman and the app’s software does the rest.[15]
Experts have also recognized the acute risks that deepfakes pose for private businesses. Consider, for example, the night before a company’s initial public offering (IPO) – the process by which a private company becomes publicly traded and, for many companies, the Holy Grail. If a disturbing deepfake were to surface suggesting that the company or its leadership had engaged in serious criminal activity, it could seriously disrupt that process.[16]
Deepfakes also affect some of Americans’ most fundamental “constitutional rights: the rights to participate equally in the political process, to join with others to advance political beliefs, and to choose their political representatives.”[17] In these politically divisive times, the potential for deepfake recordings to cause pandemonium is exceedingly high. As experts have recognized, a highly offensive deepfake released when emotions and frustrations are at a tipping point could cause certain groups to lose their cool.[18] Likewise, a similar deepfake released on the eve of a major election could, depending on what it depicted, affect the outcome.[19]
The internet and social media have given deepfakes added bite. As the U.S. Supreme Court has recognized, most First Amendment activity these days takes place on the “vast democratic forums of the internet.”[20] Within an instant, anyone can engage the world with any message, because “[s]ocial media offers ‘relatively unlimited, low-cost capacity for communication of all kinds.’”[21] Studies have also shown that when people correspond online (particularly on social media platforms), they are more likely to lie.[22] The impersonal nature of social media and the relative ease with which users can mask their identities embolden people to say or do things online they would not do in real life.[23] This propensity for false online speech and personas only adds to the likely continued proliferation of deepfakes.
AUDIOVISUAL EVIDENCE AND THE ENIGMA OF DEEPFAKES FOR COURTS
In American courts, video and audio evidence has high currency. There is nothing more damning or clarifying than a video or audio clip that clears up what happened.[24] The U.S. Supreme Court, for one, has shown itself receptive to that kind of evidence. In Scott v. Harris, for example, the court resolved a civil use-of-force case by resort to video evidence.[25] What makes Scott v. Harris remarkable is not so much its result; in Fourth Amendment cases, courts usually grant qualified immunity to police officers (and dismiss cases) unless the officers violated clearly established law.[26] Statistically, that has proven a high bar for plaintiffs to overcome.[27]
What stood out about Scott v. Harris, however, was the Supreme Court’s near-unquestioned embrace of video evidence. In resolving the summary judgment questions before it, the court “was happy to allow the video tape speak for itself.”[28] The court adopted a new summary judgment rule for cases involving audiovisual evidence: when “opposing parties tell two different stories, one of which is blatantly contradicted by the record, so that no reasonable jury could believe it, a court should not adopt that version…”[29] Lower courts have extended Scott to other objective evidence, like audio recordings and pictures.[30] Thus, audiovisual evidence has become dispositive in civil cases.[31]
Audiovisual evidence has also had a profound effect in criminal cases. From their own experiences, courts recognize that juries too are particularly susceptible to what they see and hear. When juries hear or see something from an audiovisual medium, it is hard to get them to see what else might be there.[32]
Therein lies the rub with deepfakes. Besides being highly sophisticated and convincing, deepfakes, as noted, can be nearly impossible to detect when done right.[33] That has been the concern of federal law enforcement and national security agencies. As shown, courts have readily embraced audiovisual evidence.[34] As deepfakes become even more sophisticated, the detection problem will likely become acute. So, if the very best players in the internet, social media and law enforcement fields have not found a reliable way to detect deepfakes, how can courts do any better? That should concern us all.
THE CONSTITUTIONAL DIMENSIONS OF DEEPFAKES
Deepfakes affect both private and public interests. Thus, any searching constitutional analysis must consider how deepfakes affect both. We turn to that analysis.
How Deepfakes Affect Private Interests
Because of First Amendment speech rights, “it is a prized American privilege to speak one’s mind.”[35] The freedom to speak our minds, the U.S. Supreme Court has noted, “is essential to the common quest for truth and the vitality of society as a whole.”[36] Audiovisual data and software are forms of speech.[37] Recall, at their core, deepfakes are simply false data. The First Amendment does not generally protect defamation.[38] When false speech harms another, the law affords a remedy. In Oklahoma, a “man’s good name and reputation is his most valuable personal and property right and one that no man may wrongfully injure or destroy without being held accountable…”[39] Below, we consider the most common remedies Oklahoma law provides a person who suffers a reputational harm.
Begin with defamation. The law of defamation protects reputations. That body of law provides a remedy when one person publishes falsehoods about another without justification.[40] When the plaintiff is a public figure, the U.S. Supreme Court has added a judicial gloss: the plaintiff must show that the falsehood was published deliberately or with reckless disregard of its falsity.[41]
Consider next the Oklahoma false light tort. Although false light and defamation overlap in some respects, the two are also different. In false light claims, “actual truth of the statements is not necessarily an issue, [but] a false impression relayed to the public is.”[42] Liability rests on publication of major misrepresentations of “character, history, activities or belief” that could be seriously offensive to a reasonable person.[43] Additionally, a plaintiff must show that the publication was made with knowledge of or reckless disregard of its falsity (the actual malice standard).[44] That standard – the Oklahoma Supreme Court has stated – “is a formidable one.”[45]
Those who abuse others online fare no better. As the U.S. Supreme Court has made clear, “personal abuse is not in any proper sense communication of information or opinion safeguarded by the Constitution.”[46] Understood in that sense, those who intentionally inflict emotional distress on others could also ordinarily be held liable.[47] But the U.S. Supreme Court’s decision in Snyder v. Phelps may have cabined that rule: the court held that tort liability for intentional infliction of emotional distress is inappropriate when the offensive speech is about “a matter of public concern at a public place.”[48] That matters because the Supreme Court’s decision in Packingham v. North Carolina appears to have recognized that Facebook (and the internet generally) is a public place for the exchange of speech.[49] If the target of the defamatory abuse is a public figure, the court has superimposed First Amendment constraints.[50] To recover, a public figure plaintiff has to show 1) falsity in the communication; and 2) that the falsehood was published with knowledge that it was false or with reckless disregard of its truthfulness.[51]
Deepfakes rest on their weakest constitutional footing when they cause freestanding proprietary harms. Outside reputational and emotional harms, the U.S. Supreme Court has refused to apply the New York Times v. Sullivan actual malice standard when a public figure plaintiff claims injury to property interests, as opposed to “feelings or reputation.”[52] The court reiterated that important dichotomy when it denied relief to Reverend Jerry Falwell for injuries to his feelings and emotions.[53] In Zacchini v. Scripps-Howard Broad. Co., the court permitted a plaintiff to recover, under an invasion-of-privacy theory, for the misappropriation of a creative stunt.[54] Thus, if a deepfake committed a Zacchini-like tort – that is, targeted an injured party’s commercial interests – liability would be proper.
Deepfakes and the Public Interest: Governing Constitutional Considerations
The criminal law is a tool for vindicating the public interest.[55] In fact, “[o]ur entire criminal justice system is premised on the notion that a criminal prosecution pits the government against the governed…”[56] That realization requires an analysis of deepfakes and prosecutions.
Justice Thurgood Marshall long ago observed, “It is the State that tries a man, and it is the state that must ensure that the trial is fair.”[57] Beginning with Brady v. Maryland,[58] the U.S. Supreme Court requires prosecutors to disclose to a defendant evidence favorable to him when that evidence materially bears on guilt or innocence.[59] That is because the prosecutor’s job, according to the U.S. Supreme Court, is to ensure that justice is done.[60] Against that background, the court has held that a prosecutor’s use of evidence she knows is false violates the Due Process Clause.[61] In fact, that is also true if a prosecutor fails to correct testimony that she knows is false.[62]
Deepfakes will likely bring into sharp focus a prosecutor’s knowledge of falsity. If, as noted, the prosecutor knows that material audiovisual evidence is false, then her use of it to secure a conviction violates due process. Straightforward. But, as noted, the best deepfakes, when well done, can be nearly impossible to detect even for the most adept professionals. So, under those circumstances, could there still be a due process violation? To begin, nearly impossible does not mean impossible. If a prosecutor was in a position where she should have known of the deepfake but did not guard against its use, then, just as with other forms of false evidence in a similar posture, there should be a violation.[63] But if the prosecutor did not know, and even after the exercise of diligence could not have known, of the falsehood, some courts have refused to find a constitutional violation.[64] Nothing in principle prevents a court from subjecting deepfakes to this established analytical framework.
Criminalizing Deepfakes?
Although the Supreme Court has held that false speech generally has no First Amendment protection, in United States v. Alvarez the court cabined that rule.[65] In Alvarez, a fractured court held that a statute criminalizing speech only because it was false (without proof of attendant or likely harm to any person) was unconstitutional.[66] The court reasoned that the Stolen Valor Act – the statute at issue – was a content-based restriction on speech, and such restrictions are presumptively “invalid.”[67]
Alvarez’s rationale applies to deepfakes and social media postings. If someone generates false audiovisual speech online – for example, falsely claiming to have achieved certain feats of excellence or nobility – Alvarez would likely constrain the hard hand of the criminal law.[68]
The U.S. Supreme Court in Beauharnais v. Illinois and Chaplinsky v. New Hampshire has, however, upheld criminal laws that punish false and offensive words directed at a person or group that would tend to cause a breach of the peace or public disorder.[69] The court has extended the New York Times v. Sullivan actual malice requirement to criminal libel statutes protecting public figures.[70] If such narrowly tailored laws are still on the books, the criminal law could perhaps deal with deepfakes.
Experts have also recognized the potential for deepfakes to falsely claim or depict a public catastrophe or terrorist attack – i.e., the equivalent of falsely shouting fire in a crowded theatre and causing public panic.[71] As Justice Holmes wrote for the Supreme Court 100 years ago, “[t]he most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic.”[72] Justice Breyer’s controlling concurring opinion in United States v. Alvarez (which Justice Kagan joined)[73] echoed Justice Holmes’ position. He alluded to existing federal statutes and regulations that punish false statements about terrorist attacks, catastrophes or crimes.[74] Some federal courts have, for example, upheld convictions under the federal Anti-Hoax Statute[75] on Justice Holmes’ false-fire-shouting rationale.[76] So, if a deepfake were to make false statements about terrorist attacks or public catastrophes, causing significant public harm, criminal anti-hoax laws could perhaps apply.
CONCLUSION
Deepfakes are a new phenomenon that challenges settled expectations about what is real and what is not. Before, with the benefit of a trained eye or ear, it was easier to tell the two apart. Not anymore. Even though the deepfake problem is new, as shown, the old tools that have traditionally dealt with false, unprotected speech can provide redress. Until the deepfake identification dilemma is resolved, the technology will remain both a forensic and a legal quandary. Experience teaches that people, no less than the law, can’t solve problems they can’t define.
Disclaimer: The materials and information on this blog are provided for informational purposes only, and may not reflect the law in your jurisdiction. The information contained on this blog and the specific post(s) you have reviewed or accessed should not be construed as legal advice, nor does the receipt and review of that information create an attorney-client relationship between the individual author(s), this blog, or the recipient of such information. Readers of this blog or any of its posts are urged to seek appropriate legal counsel in their respective jurisdiction, state, or country, so that advice tailored to that person’s particular facts and circumstances might be given from those licensed legal professionals.
[1] “Inside the Pentagon’s Race Against Deepfake Videos,” CNN, www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/ (last accessed Sept. 29, 2019).
[2] Id.
[3] Scott v. Harris, 550 U.S. 372, 380 (2007) (effect of video evidence on summary judgment); Leo v. Long Island R. Co., 307 F.R.D. 314, 326 (S.D.N.Y. 2015) (noting strong effect of video on jury).
[4] See Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 580-581 & n. 15 (1994) (satire and parody); Hustler Mag., Inc. v. Falwell, 485 U.S. 46, 53 (1988) (caricature).
[5] Campbell, 510 U.S. at 575 (discussing the fair use doctrine in copyright law).
[6] “Inside the Pentagon’s Race Against Deepfake Videos,” CNN, supra text accompanying note 1.
[7] Prepared Written Testimony and Statement of Jack Clark, House Permanent Select Committee on Intelligence (June 13, 2019), docs.house.gov/meetings/IG/IG00/20190613/109620/HHRG-116-IG00-Wstate-ClarkJ-20190613.pdf (last accessed Sept. 29, 2019).
[8] “Inside the Pentagon’s Race Against Deepfake Videos,” CNN, supra text accompanying note 1.
[9] Id.
[10] Id.
[11] Id.
[12] Id.; see also Prepared Written Testimony and Statement of Danielle Keats Citron, House Permanent Select Committee on Intelligence (June 13, 2019), intelligence.house.gov/uploadedfiles/citron_testimony_for_house_committee_on_deep_fakes.pdf (last accessed Sept. 29, 2019).
[13] “Inside the Pentagon’s Race Against Deepfake Videos,” CNN, supra text accompanying note 1.
[14] “This Controversial Deepfake App Lets Anyone Easily Create Fake Nudes of Any Woman with Just a Click, and It’s a Frightening Look Into the Future of Revenge Porn,” Business Insider, www.businessinsider.com/deepnude-app-makes-deepfake-nudes-women-easy-revenge-porn-bullying-2019-6 (last accessed Sept. 29, 2019).
[15] Id.
[16] See D. Citron, Prepared Written Testimony and Statement, supra text accompanying note 12.
[17] Rucho v. Common Cause, 139 S.Ct. 2484, 2509 (2019) (Kagan, J., dissenting).
[18] See D. Citron, Prepared Written Testimony and Statement, supra text accompanying note 12.
[19] Id.
[20] Packingham v. North Carolina, 137 S.Ct. 1730, 1735 (2017).
[21] Id. (quoting Reno v. Am. Civil Lib. Union, 521 U.S. 844, 868 (1997)).
[22] See “People More Likely to Lie on Twitter than in Real Life, Survey Reveals,” The Telegraph, www.telegraph.co.uk/technology/social-media/8085772/People-more-likely-to-lie-on-Twitter-than-in-real-life-survey-reveals.html (last accessed Oct. 5, 2019).
[23] Id.
[24] See generally D. Citron, Prepared Written Testimony and Statement, supra text accompanying note 12.
[25] 550 U.S. 372 (2007).
[26] White v. Pauly, 137 S. Ct. 548, 551 (2017).
[27] See, e.g., J.C. Schwartz, “How Qualified Immunity Fails,” 127 Yale L. J. 2, 25-44 (2017).
[28] Scott, 550 U.S. at 378 n. 5.
[29] Id. at 380.
[30] See Rhoads v. Miller, 352 F. App’x 289, 291-292 (10th Cir. 2009) (unpublished).
[31] See M.A. Schwartz et al., “Analysis of Videotape Evidence in Police Misconduct Cases,” 25 Touro L. Rev. 857, 863 (2009).
[32] See U.S. v. Ada, No. S1 96 Cr. 430, 1997 WL 122753, at *2 (S.D.N.Y. March 19, 1997).
[33] D. Citron, Prepared Written Testimony and Statement, supra text accompanying note 12.
[34] See text accompanying supra notes 1, 7, 12-14.
[35] N.Y. Times v. Sullivan, 376 U.S. 254, 269 (1964); U.S. Const. Amend. I; Okla. Const. Art. 2, §22.
[36] Bose Corp. v. Cons. Union of U.S., Inc., 466 U.S. 485, 503–504 (1984).
[37] See Mbilike M. Mwafulirwa, “The iPhone, the Speaker and Us: Constitutional Expectations in the Smart Age,” OBJ 90 p. 24 (March 2019) (collecting cases).
[38] See Chaplinsky v New Hampshire, 315 U.S. 568, 572 (1942).
[39] Dusabek v. Martz, 1926 OK 431, ¶8, 249 P. 145, 147; see also Okla. Const. Art. 2, §6.
[40] See 12 O.S. §1441; Grogan v. KOKH, LLC, 2011 OK CIV APP 34, ¶8, 256 P.3d 1021, 1026-1027.
[41] See Sullivan, 376 U.S. at 280; Time, Inc. v. Hill, 385 U.S. 374, 388 (1967).
[42] McCormack v. Okla. Publ’g Co., 613 P.2d 737, 741 (Okla. 1980).
[43] See Restatement (Second) of Torts §652E cmt. b (1977).
[44] Colbert v. World Publ’g Co., 1987 OK 116, ¶¶15-16, 747 P.2d 286, 290-292.
[45] Herbert v. Okla. Christ. Coalition, 1999 OK 90, ¶19, 992 P.2d 322, 328.
[46] Cantwell v. Connecticut, 310 U.S. 296, 310 (1940).
[47] See OUJI 20.1 (elements of liability for intentional infliction of emotional distress).
[48] 562 U.S. 443, 456 (2011) (emphasis added).
[49] See Packingham v. North Carolina, 137 S. Ct. 1730, 1735 (2017).
[50] See Falwell, 485 U.S. at 56-57.
[51] Id.
[52] Zacchini v. Scripps-Howard Broad. Co., 433 U.S. 562 (1977).
[53] See Falwell, 485 U.S. at 52-53.
[54] Zacchini, 433 U.S. at 572-574.
[55] See Standefer v. United States, 447 U.S. 10, 25 (1980).
[56] Robertson v. U.S. ex rel. Watson, 130 S.Ct. 2184, 2188 (2010) (Roberts, C.J., dissenting).
[57] Moore v. Illinois, 408 U.S. 786, 810 (1972) (Marshall, J., dissenting).
[58] 373 U.S. 83 (1963).
[59] Id. at 87.
[60] Banks v. Dretke, 540 U.S. 668, 696 (2004); Berger v. United States, 295 U.S. 78, 88 (1935).
[61] Mooney v. Holohan, 294 U.S. 103, 112 (1935).
[62] See Napue v. Illinois, 360 U.S. 264, 269 (1959).
[63] See, e.g., United States v. Agurs, 427 U.S. 97, 103 (1976) (noting the use of perjured testimony which the prosecutor “knew, or should have known” of); cf. Giglio v. United States, 405 U.S. 150, 153-155 (1972) (violation found when witness testified falsely and prosecutor was in a position to have been aware).
[64] See, e.g., United States v. Wall, 389 F.3d 457, 473 (5th Cir. 2004).
[65] 567 U.S. 709 (2012); compare, e.g., Falwell, 485 U.S. at 62 (false speech “particularly valueless”).
[66] Alvarez, 567 U.S. at 718-728 (Plurality Opinion); id. at 734-735 (Breyer, Kagan, JJ., concurring).
[67] Rosenberger v. Rector and Visitors of Univ. of Va., 515 U. S. 819, 828–829 (1995).
[68] See Alvarez, 567 U.S. at 718-728 (Plurality Opinion); id. at 734-735 (Breyer, Kagan, JJ., concurring).
[69] 343 U.S. 250 (1952) (likely to lead to public disorder); 315 U.S. 568 (1942) (likely to lead to a breach of the peace).
[70] Garrison v. Louisiana, 379 U.S. 64, 67-78 (1964).
[71] Schenck v. United States, 249 U.S. 47, 52 (1919).
[72] Id.
[73] Marks v. United States, 430 U.S. 188, 193-194 (1977) (when a fragmented Supreme Court decides a case, the narrowest concurring opinion represents the court’s holding).
[74] Alvarez, 567 U.S. at 735.
[75] 18 U.S.C. §1038.
[76] United States v. Braham, 520 F. Supp. 2d 619, 628 (D.N.J. 2007); United States v. Keyser, 704 F.3d 631, 638 (9th Cir. 2012).