Tech Brings Authentication Challenges In Ad And IP Cases


The ability of any individual, without access to sophisticated technology, to decipher the “authenticity” of any experience is diminishing daily. Moreover, this threat to the integrity of the law goes beyond digital impersonation and “deep fake” software driven by artificial intelligence. The famous Marx Brothers line, “Who ya gonna believe, me or your own eyes?” was once funny because it was ridiculous. Soon, it will be a description of our jobs and our lives.

While some industries have long been sensitive to the pitfalls of existing authentication protocols, recent advances should raise concerns about whether the legal profession is prepared for what is about to happen to the culture. These concerns are not entirely new, of course, and they are not restricted to whether we can trust the source of a funny meme we found online. Rather, these developments point to a larger anxiety and a broader problem with the way law interacts with these new technologies. Augmented reality, social media, AI-powered editing software, bots, influencer campaigns, immersive advertising — the common theme shared by each of these recent innovations is their ability to undermine bedrock concepts of “truth” even when we are not online.

For lawyers, this slow drift toward a curated reality ripped from “The Truman Show” can feel particularly troubling. Law as an institution is predicated upon the idea that there is an underlying truth to be discovered and protected. It is self-evident that a legal system designed to hand down permanent judgments based on a systematic process established to distinguish “true” things must become equipped for an insidious “fake” culture. But as we will see, the law isn’t ready, and practitioners and their clients have barely begun to come to terms with the ramifications of our new world.

Nowhere is this deficit more apparent than in the fields of advertising and intellectual property law — two disciplines closely tied to these new technologies and, ironically, uniquely predicated on ascertaining “reality.” A simple review of some of the more superficial developments in the space can lead to only one conclusion: unless we construct a new taxonomy of authenticity and trust within the law, and soon, our institutions and our profession will be overwhelmed by even more extreme technology visible just over the horizon.

Augmented Reality

Intellectual property is often described as the “opposite” of physical property: something that we can own or control that has no tangible substance. It is a legal construct, rather than a physical place or a thing we can hold in our hand. But this framing isn’t exactly right, as the value of intellectual property is only relevant when attached to a product or a physical process in the “real” world. Patents are infringed when a product is made, used, imported or sold. Trademarks are infringed when a product or service uses a trademark in a manner likely to cause confusion. Copyrights are infringed when an original work of authorship is copied.

But what if we told you that a nearly identical world existed as an overlay, plotted 1:1 against everything you see when you look around you? And maybe that world isn’t one world, but dozens or hundreds or millions of worlds; digital doppelgangers filled with additional information you could access seamlessly through viewers that instantly augment your own physical vision. This “mirrorworld,”[1] as described recently in Wired magazine, may sound like the plot to a Neal Stephenson novel, but it is already here in a fairly primitive form. Pokémon Go, anyone?

And as soon as cheap eyewear (or contact lenses) becomes viable and broadly available, the technology already exists to pump a near facsimile of our own world directly into your optic nerve,[2] in real time. But what are the laws that apply in a world just like ours, except owned and controlled by a third party? And more importantly, how can you trust what you see? Imagine scores from China’s “social credit system” — recently used to ban 23 million citizens from traveling[3] — overlaid atop human beings moving about the physical world. (Or instead of imagining, take in the unsettlingly familiar world depicted by Netflix in the “Black Mirror” episode “Nosedive.”)[4]

When we look at some of the baby steps we have already taken toward an artificial immersive world, it becomes clear that the legal implications of such a synthetic existence will be difficult to manage.

Influencers

Our first example from this new world is a familiar one, and an “easy” one, but even something as simple as a celebrity endorsement can quickly become troubling in the context of our larger crisis of confidence in our senses.

It is already a cliché to say that influencers are the new news. They are the new branding. But, in the words of a certain superhero, with great power comes great responsibility. And how authentic are influencers and the opinions they express? In May 2016, Kardashian-adjacent influencer Scott Disick posted to Instagram the following photo caption, verbatim: “Here you go, at 4 PM EST, write the below. Caption: Keeping up with the summer workout routine with my morning @booteauk protein shake!”

In other words, Disick inadvertently copied and pasted into his own post caption the instructions he had received from his sponsor’s marketing rep. He later revised the post, most assuredly leaving a trail of exploding heads at Bootea in his wake.

There are two ways to look at this scenario: as a simple regulatory matter, or as a slippery slope. The Federal Trade Commission is the agency generally charged with ensuring truth in advertising, and it has rules about online endorsements in place. Either an influencer follows those rules or doesn’t — and influencers who violate guidance from an active regulator face ramifications. As a technical matter, we can note that Disick’s post failed to comply with FTC endorsement guidelines because it lacked the required disclosure (such as an “#ad” hashtag) of a material connection between Disick and Bootea (i.e., that Bootea pays Disick). A regulator can look to this violation and force a correction. This rule-based approach has served us well in the past, and we can feel comforted by it today.
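To make the rule-based approach concrete, the check a brand compliance team might perform can be imagined as a mechanical screen over sponsored captions. The following Python sketch is purely illustrative: the list of accepted disclosure tags is an assumption, and the FTC’s actual test turns on whether a disclosure is clear and conspicuous, not on string matching.

```python
# Illustrative sketch only: a naive screen for missing endorsement
# disclosures in sponsored captions. The accepted tags below are assumed
# examples; the FTC requires a clear and conspicuous disclosure, not any
# particular hashtag.
DISCLOSURE_TAGS = {"#ad", "#sponsored", "#paidpartnership"}

def missing_disclosure(caption: str, is_sponsored: bool) -> bool:
    """Flag sponsored captions that contain no disclosure tag."""
    if not is_sponsored:
        return False
    words = caption.lower().split()
    return not any(tag in words for tag in DISCLOSURE_TAGS)

# Disick's caption contains no disclosure tag at all, so it gets flagged.
caption = ("Keeping up with the summer workout routine with my morning "
           "@booteauk protein shake!")
print(missing_disclosure(caption, is_sponsored=True))  # True
```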

But things become a bit more troubling when we start asking a different question: what exactly is Disick’s “truth”? Let’s even assume Disick had posted the required disclosures. Would all be right with this endorsement? This rare behind-the-scenes look suggests that the answer may be “no” — especially in a world where tens of thousands of influencers send out millions of individual social media posts and the line between any one person’s “real” life and their existence as a commercial endorser becomes harder to parse.

In an advertising-saturated world reminiscent of reality television, legal compliance may not equal authenticity. And isn’t that what false advertising rules are supposed to ensure: authentic, consumer-relevant information? Does Disick actually incorporate the product into his morning workout routine like he says? What instructions has he received? Does that make any of this less “true,” or is it baked into the experience — in other words, is it presumed to be fake, and we’re all simply playing along?

In this atmosphere, should we ask whether disclaimers really matter at all — does a hashtag designed to ensure that consumers understand material connections with an endorser serve a useful purpose when the entire point of an influencer today is to erase the distinction between trustworthy opinions and puppetry?

Bots

As New York magazine famously noted[5] late last year, some web engineers fear the coming of “the Inversion”: the point at which fraud detection systems for popular websites will begin regarding bot activity as real human activity, and human activity as fake.

By some metrics, the Inversion may already be here. Bots are everywhere, and their sophistication and frequency are increasing exponentially. Last year, a Pew Research study[6] estimated that 66 percent of all tweets sharing links to popular websites and articles were shared by bots, and that the most active 500 Twitter bots were responsible for 22 percent of all tweeted links. In 2016, the security firm Imperva Inc. reported[7] that bots accounted for a majority (52 percent) of all internet traffic. A 2018 report[8] by GlobalDots LLC put forth a smaller overall number — 42.2 percent of all traffic — but a larger one (62 percent) for bot traffic on the world’s largest websites.

And bots have power. They are manipulating elections[9] and hijacking debates on important policy issues.[10] And they are mercenaries for hire, bought and sold to the highest bidder. Brands that underestimate them face serious consequences. Points North Group, an influencer marketing analytics specialist, attempted to measure the financial impact[11] of bots in influencer marketing and found that of the $150 million spent by brands on influencers in the U.S. and Canada in the second quarter of 2018, $11 million — over 7 percent of the total — was lost to bots. The same study found that nearly half of the followers of certain brands’ sponsored Instagram posts were fake.

Some brands are tackling this challenge to authenticity by attempting to remove the bot footprint from their marketing. Unilever chief marketing officer Keith Weed made waves in June 2018[12] when he announced that the company would no longer work with influencers known to use fake followers or bots to grow their accounts. Unilever’s decision has been widely applauded — not only does it make sense to crack down from a brand authenticity perspective, but it makes financial sense. Why pay influencers for followers that aren’t potential customers? Expect “bot clauses” to become standard fare in social media influencer contracts.

And while brands crack down, some governments are stepping in too. California’s SB 1001,[13] signed into law by Gov. Jerry Brown in late 2018, bans bots from pretending to be human by requiring companies that use chatbots to conspicuously disclose the nature of the bots in interactions with humans. The law is aimed primarily at commercial and political bots, but its scope may sweep in certain “helpful” bots as well (e.g., a bot helping you troubleshoot a computer problem, and in the process offering you a piece of software for purchase). And while it will do nothing to slow the inevitable onset of the Inversion, laws like this may be the first step in making what’s coming more palatable.
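For illustration, the kind of disclosure SB 1001 contemplates can be implemented as little more than a wrapper around a bot’s replies. This minimal Python sketch uses assumed wording; the statute does not prescribe any particular text or implementation, only that the disclosure be clear and conspicuous.

```python
# Minimal sketch of an SB 1001-style disclosure: the bot identifies itself
# at the start of a conversation. The disclosure wording here is an assumed
# example; the statute does not mandate specific language.
BOT_DISCLOSURE = "Hi! I'm an automated assistant, not a human."

def bot_reply(user_message: str, first_turn: bool = False) -> str:
    answer = f"Echo: {user_message}"  # stand-in for real chatbot logic
    return f"{BOT_DISCLOSURE}\n{answer}" if first_turn else answer

print(bot_reply("My laptop won't boot. Can you help?", first_turn=True))
```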

Consumer Reviews

For years the organization Trampoline Safety of America reviewed and rated the safety of various trampoline brands. Two favorably rated brands (described as among “the best trampolines available on the market today”) were the Olympus and Infinity brands sold by Anaheim-based Sonny Le and Bobby Le. This distinction afforded Sonny and Bobby the right to display the Trampoline Safety of America seal on their sales websites, with some versions of the logo bearing an additional “Trampoline of the Year” flourish.

The problem? Trampoline Safety of America was not an independent, third-party engineering organization as it claimed; it was an organization masterminded entirely by Sonny and Bobby Le. As were other organizations the duo conjured up and created websites for — “The Bureau of Trampoline Review” and “Top Trampoline Review” among them. As were certain individual commenters on third-party blogs and websites who touted the duo’s products and disparaged those sold by competitors.

The blatantly deceptive nature of In the matter of Son Le and Bao Le[14] made it an easy case for the FTC to decide. But the case is important because it highlights just how creative fraudsters can be in utilizing available technology, and how difficult it can be to figure out what is actually “real” in an electronic marketplace. Savvy consumers will often attempt to identify fake product or service reviews by corroborating suspected fakes across multiple platforms. But even the most diligent of safety-oriented parents who had cross-checked Trampoline Safety of America’s findings against those of “The Bureau of Trampoline Review” and “Top Trampoline Review” would have reached the same false conclusion — that the Olympus and Infinity products were among the safest around.

And creating fake websites to boost one’s own products is only the tip of the iceberg. On Feb. 26, 2019, the FTC announced its first case challenging a marketer’s use of fake paid third-party reviews on an independent retail website.[15] According to the FTC’s complaint, the defendants had advertised and sold appetite-suppressing weight loss pills on Amazon, and paid a third-party website[16] to create and post five-star reviews of their products so that the products would not fall below an average rating of 4.3/5 stars. The defendants stipulated to a settlement.
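The complaint’s 4.3-star floor makes the underlying arithmetic easy to sketch. The organic review figures below are hypothetical, but they show how quickly purchased five-star reviews can prop up a sagging average.

```python
# Hypothetical illustration of the review-buying arithmetic: the smallest
# number k of purchased five-star reviews satisfying
#   (n * avg + 5k) / (n + k) >= floor
import math

def five_star_reviews_needed(n_organic: int, organic_avg: float,
                             floor: float = 4.3) -> int:
    if organic_avg >= floor:
        return 0
    return math.ceil(n_organic * (floor - organic_avg) / (5.0 - floor))

# e.g., 200 genuine reviews averaging 3.8 stars need 143 fake five-star
# reviews to hold a 4.3 average -- and more with every new honest review.
print(five_star_reviews_needed(200, 3.8))  # 143
```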

Some regulatory officials have referred to this phenomenon, tongue in cheek, as an “insider rating” problem. And make no mistake, there are many existing rules to deal with it. For example, the FTC and state attorneys general can address clear instances of fraud under their authority to regulate unfair and deceptive trade practices.

The Consumer Review Fairness Act, signed into law in 2016, attacks the problem from the opposite direction: by removing impediments to truthful consumer counter-speech. The CRFA bans the use of gag clauses in non-negotiable consumer form contracts. In other words, it is improper for companies to restrict their customers from sharing honest opinions about the company’s products, services or conduct in any forum, including social media.

The CRFA is still fairly new, however, and enforcement has been limited. In August 2018, the FTC alleged its first CRFA violation[17] against the promoters of “Sellers Playbook,” a business opportunity venture. But in that case the FTC also alleged that the earnings claims used to pitch the system were fraudulent, raising the question of whether the FTC will be willing to bring claims under the CRFA where there are no allegations of underlying fraud.

Specimens

Even more “traditional” intellectual property practices — trademark prosecution, for instance — are not outside the reach of a counterfeit culture. The U.S. Patent and Trademark Office has always struggled to deal with applications that are “fake” in some way: applicants without a “bona fide” intent to use their marks in U.S. commerce (maybe they are foreign applicants paid by foreign governments to obtain U.S. registration), applicants claiming they offer more goods and services than they actually do (maybe they own a registration for “meats; jams; olives” but only sell olives), etc. To combat these “fake” applications and registrations, the Trademark Office has adopted countermeasures, like tightening up examination standards, or launching in November 2017 the “post registration proof of use audit program.”[18]

But technology has compounded the problem. Proof of use of a mark in U.S. commerce is a prerequisite to obtaining a U.S. trademark registration (or to maintaining a U.S. registration based on a foreign registration). Advances in digital imaging have made it nearly impossible to distinguish between a computer-generated product image (not permitted) and an actual touched-up product photograph (permitted). Anecdotal experience reveals that trademark examining attorneys are often so wary of CGI images being submitted as specimens that even an inkling of digital editing will cause them to issue an Office action refusing registration and requiring a substitute specimen.

Even more pernicious is the practice of downloading actual photographs from the internet and then digitally altering them to match the applied-for branding. The scope of the problem has become so troubling of late that the USPTO established a pilot program protest “hotline” (really, a dedicated email inbox) for users to report digitally altered or fabricated specimens.[19] Admirable as such an effort is, however, it raises the question: if the USPTO can’t itself determine whether a specimen is fabricated, how can we?

Deep Fakes

Perhaps the most emblematic development of the post-truth era is the “deep fake” itself: AI-driven, digitally altered content (often video or audio) designed to seamlessly mimic a “real” person spewing fake statements or engaging in fake behavior. Originally a creature of porn and political dirty tricks, the technology’s implications for the law grow more distressing as it improves. Combined with the promise of augmented reality, it threatens to make delineating the “real” actions of others from fake ones all but impossible.

There are two areas where this development has particular resonance — as a matter of substance and as a matter of legal procedure. Substantively, it is clear that the law needs to consider new ways to define and protect the “real” identity of both people and corporations.

Should the notion of “libel per se” be expanded in order to cover subtle alterations in your digital profile, irrespective of demonstrable harm? Will the remedies available for trademark infringement and false advertising require expansion to include automatic fee-shifting and statutory damages so that the mischief of digital editing doesn’t become only the province of those with the resources to defend the “reality” of their actions, even when the “damage” is far more difficult to assess? Will the accessibility of First Amendment defenses to federal intellectual property causes of action be narrowed in deep fake cases, or will claims become the exclusive province of state torts like false light and right of publicity? The incremental harm of digital edits to the substance of your life may not be individually enormous, but could easily become insidious if left to fester.

The procedural issues are perhaps even more intricate. In litigation, the authenticity of the evidence is of paramount importance. But how do you prove that a contract or a document is real when it can be realistically generated by artificial intelligence — complete with fake trails of evidence? Computer scientists and neuroscientists alike are confronting ever more sophisticated ways to manipulate language and text[20] to create invented content bearing all the indicia of authenticity. When future courts are faced with contract documents that have been subtly altered, or emails that have been changed, they may not be able to tell the difference so easily — especially when those documents have existed entirely in an electronic environment.

We may comfort ourselves with the idea that “experts” can be used, but think about how often, each day, a lawyer must rely on their own eyes to determine the truth of a client’s position, or the strength of an opponent’s claim. How can due diligence in a deal be satisfied when complex tools are necessary in order to determine the truth of even the most basic facts? What changes to the rules of evidence may be necessary in order to deal with a radical change in what we can presume about any document placed in front of the court? The problem is that lawyers deal with facts each day in a perfectly ordinary way, while new technology demands that each fact be proven like an undergraduate philosophy problem.

The promise of blockchain[21] and other crypto-authentication technologies suggests that some new tech can play a significant countervailing role — demonstrating the chain of custody of a document, or the authenticity of a signature. But even these technologies have practical limits unless they are applied to nearly every single thing in our world that could conceivably become relevant evidence in a future (unknown) legal dispute.
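The chain-of-custody idea is straightforward to sketch: a hash chain, the primitive underlying blockchain ledgers, fingerprints a document and then folds each custody event into a running digest, so that altering the document or any earlier entry breaks every hash that follows. The Python sketch below illustrates only the bare concept, not any particular product; real systems add digital signatures, trusted timestamps and distributed consensus.

```python
# Bare-bones hash chain for document custody (conceptual sketch only).
# Each entry fingerprints the previous entry plus a new custody event, so
# tampering with the document or any earlier record changes the final hash.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

document = b"Executed supply agreement, v1"  # stand-in for the real file bytes
chain = [sha256_hex(document)]  # entry 0: fingerprint of the document itself

for event in ("received by counsel 2019-03-01",
              "produced in discovery 2019-06-14"):
    chain.append(sha256_hex((chain[-1] + event).encode()))

# To verify custody, recompute the chain from the original document and the
# event log; any mismatch with the recorded final hash signals tampering.
print(chain[-1])
```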

Recent efforts to test whether the current rules of evidence can support blockchain ledgers as evidence[22] — and statutes expressly deeming them self-authenticating, such as Vermont’s 12 V.S.A. § 1913[23] — are promising. But even these modest successes demonstrate the massive amount of discovery and preparatory work that will be necessary to educate fact-finders on the technology and support inferences of authenticity, and there is little precedent on how resistant this kind of evidence will be to hearsay objections across different forums.

More disturbingly, while recent rulings dealing with the self-authentication of electronically generated records — like the 9th Circuit’s decision in United States v. Lizarraga-Tirado[24] — are an auspicious sign, the coming world of quantum computing threatens to undermine, in perhaps only a few short years, the encryption framework on which these approaches depend.

The Answer

There are, of course, no easy answers. Moreover, direct neural-computer interfaces and other futuristic technology currently in development could lead us straight from the plot of “The Truman Show” to the plot of “The Matrix” in a decade. But there is little question that laws need to be changed, and the legal profession needs to be sensitized. The next few years will likely see laws with causes of action akin to defamation that directly address digital impersonation without requiring proof of harm; new standards for contracts that build in authentication processes; new rules for advertising that use augmented reality to reveal more detail about the relationships among influencers, endorsers and brands; and courtrooms where judges rely on independent expertise to frame the reality of the evidence in every case.

As Niels Bohr is reputed to have said, “prediction is very difficult, especially about the future.” We should always embrace a certain humility about our own ability to recognize the often tectonic shifts that unexpectedly arise from new technology. But unless we systematically begin to confront the reality that has already presented itself to us today, as a profession, we risk becoming useless in a future where truth becomes a choice.

Darren S. Cahr is a partner and Tore Thomas DeBella is an associate at Drinker Biddle & Reath LLP.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the firm, its clients, or Portfolio Media Inc. or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

[1] https://www.wired.com/story/mirrorworld-ar-next-big-tech-platform/

[2] https://en.m.wikipedia.org/wiki/Google_Glass

[3] http://fortune.com/2019/02/22/china-social-credit-travel-ban/

[4] https://www.youtube.com/watch?v=R32qWdOWrTo

[5] http://nymag.com/intelligencer/2018/12/how-much-of-the-internet-is-fake.html

[6] http://www.pewinternet.org/2018/04/09/bots-in-the-twittersphere/

[7] https://www.incapsula.com/blog/bot-traffic-report-2016.html

[8] https://www.globaldots.com/2018-bad-bot-report-the-year-bad-bots-went-mainstream/

[9] https://venturebeat.com/2016/10/04/are-political-bots-stacking-the-deck-in-the-presidential-race/

[10] https://www.wired.com/story/bots-broke-fcc-public-comment-system/

[11] https://www.chiefmarketer.com/bots-eating-one-third-brands-budgets-instagram-influencers-report/

[12] https://www.adweek.com/brand-marketing/unilever-says-no-more-fake-followers-and-bots-influencers-cheer-and-question-the-future/

[13] https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1001

[14] https://www.ftc.gov/enforcement/cases-proceedings/162-3178/son-le-bao-le-matter

[15] https://www.ftc.gov/news-events/press-releases/2019/02/ftc-brings-first-case-challenging-fake-paid-reviews-independent

[16] http://www.amazonverifiedreviews.com/

[17] https://www.ftc.gov/news-events/blogs/business-blog/2018/08/first-consumer-review-fairness-case-takes-promoters-big

[18] https://www.uspto.gov/trademarks-maintaining-trademark-registration/post-registration-audit-program

[19] See here for what protesters should include.

[20] https://arstechnica.com/information-technology/2019/02/researchers-scared-by-their-own-work-hold-back-deepfakes-for-text-ai/

[21] https://www.forbes.com/sites/davidblack/2019/02/04/blockchain-smart-contracts-arent-smart-and-arent-contracts/#212dea781e6a

[22] https://www.law360.com/articles/1131844

[23] https://legislature.vermont.gov/statutes/section/12/081/01913

[24] http://cdn.ca9.uscourts.gov/datastore/opinions/2015/06/18/13-10530%20web%20corrected%202.pdf

Originally published by Law360 on March 5, 2019.



