Given the outcry from the left over Cambridge Analytica, you wouldn’t suspect that many left-of-center privacy advocates recently made a cause célèbre out of defending the free leveraging of your personal information by unseen agents, in a 2017 case pending before a federal district court in California.
The arguments set out in hiQ Labs v. LinkedIn offer a pre-Cambridge Analytica glimpse into progressive rationales for data profiteering; the case reached oral argument before the Ninth Circuit Court of Appeals in March.
The controversy erupted last spring when hiQ, a venture-backed data analytics company, asked a court to order LinkedIn to allow it to access information from the site. The resulting district court order gave little weight to hiQ’s possible violations of user privacy, homing in instead on the company’s right to leverage and exploit user information obtained by LinkedIn. In LinkedIn’s appeal of that injunction, an amicus brief supporting LinkedIn put it this way: “Personal data is central to this case, even though users are not represented in this proceeding.”
Since the original filing, that centrality has been reinforced by the recent barometric drop in the popularity of social media. The attention being paid to data profiteering as a result of the left’s politicization of the Cambridge Analytica disclosures has flushed out contradictions in its support of an “open internet” — a support that often negates individual choices and expectations. Such contradictions are on parade in this case. Tellingly, new regulatory and congressional oversight purporting to tie data aggregation to election tampering may do nothing to reverse the balance of the equities that the lower court, in granting the original injunction back in August, determined lay with hiQ.
LinkedIn Member Data Props Up hiQ’s Business Model
HiQ, with $14.5 million in new investment, is in the business of statistically analyzing publicly available workforce information and selling it to employers, supposedly to help them reward and retain in-demand employees. LinkedIn is a well-known professional networking company. HiQ obtains the raw data it needs by accessing and “scraping” certain LinkedIn member profiles. Although the use of automated scripts violates LinkedIn’s Terms of Service, hiQ denies any wrongdoing.
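For readers curious what “scraping” by automated script means in practice, here is a minimal, purely illustrative sketch in Python. The URL pattern, page structure, and field names are hypothetical; nothing below is drawn from hiQ’s actual tooling.

```python
# Illustrative sketch only: how an automated script "scrapes" public profile pages.
# The URL and CSS selectors below are invented for illustration, not hiQ's.
import time

import requests
from bs4 import BeautifulSoup


def scrape_public_profile(profile_url):
    # An ordinary HTTP GET -- no login, no API key -- fetches a public page,
    # exactly as a browser or a search-engine crawler would.
    response = requests.get(profile_url, timeout=10)
    response.raise_for_status()
    page = BeautifulSoup(response.text, "html.parser")
    # Pull a few fields out of the HTML (selectors are hypothetical).
    return {
        "name": page.select_one("h1.profile-name").get_text(strip=True),
        "headline": page.select_one("p.profile-headline").get_text(strip=True),
        "location": page.select_one("span.profile-location").get_text(strip=True),
    }


if __name__ == "__main__":
    # Looping a script like this over thousands of profile URLs is what turns
    # ordinary page viewing into the mass collection at issue in the case.
    urls = ["https://www.example.com/in/some-public-profile"]
    for url in urls:
        print(scrape_public_profile(url))
        time.sleep(1)  # pause between requests
```

The sketch makes one point only: technically, such a script is indistinguishable from any other visitor requesting a public page. The restriction hiQ is accused of flouting lives in LinkedIn’s Terms of Service, not in the protocol.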
The denials are very progressive-sounding. One justification hiQ gives is that its analytics benefit LinkedIn members by keeping their employers apprised of their worth in the marketplace. Employers will supposedly increase the salaries of those whom hiQ identifies as “flight risks.” Elevating free speech above contract, hiQ also asserts a right of access to member data based on recent free-speech cases decided under the California Constitution.
In those standout precedents, private properties, including malls and websites, are treated as equivalents of the “public square,” where all citizens have the right to participate in “expressive communities.” The idea is that the data hiQ scrapes is, or should be, available to third-party harvesters who aren’t malicious actors. More below on telling the difference.
Meanwhile, LinkedIn, for its part, calls what hiQ is doing “exploitative free riding” and has accused hiQ of, among other things, violating the Computer Fraud and Abuse Act (CFAA). This cause of action is attractive thanks in part to the relative weakness of other options. It is unclear that LinkedIn has title to members’ information, which weakens its claim of copyright infringement. But its resort to the anti-hacking CFAA is controversial, as the statute contemplates unauthorized intrusion onto another’s computer, akin to trespass, and carries possible criminal sanctions. Advocates of an “open internet” have mobilized against it, leaving privacy concerns on public-facing websites like LinkedIn in the dust.
‘I Agree’ … To What Exactly?
Although this case does not involve quizzes, psychographic profiles or misleading “friend permissions,” it raises some of the same questions as Cambridge Analytica about what a user is actually consenting to when he or she “agrees” to the parlaying of personal information to unknown transferees. The Electronic Frontier Foundation (EFF) has been one of the more notable voices speaking out against Facebook and its facilitation of “user surveillance” in the C.A. affair.
Yet just a few short months ago, it filed an amicus brief on behalf of hiQ in LinkedIn’s Ninth Circuit appeal of the injunction order issued in hiQ’s favor, which allows the scraping to continue. As the EFF stated in its brief, quoting Orin Kerr, “With ‘no authentication requirement, the web server welcomes all, and the norm is openness to the world’ — including ‘any one of the billion or so Internet users around the world’ or ‘bot[s]’ … running automatically.”
This shows how privacy activists, especially anti-Trump ones, argue out of both sides of their mouths. On the one side, they emphasize informed consent by the data subject; on the other, they promote mass-data collection in the name of a self-servingly defined public good.
Here, hiQ and its supporters rely on one oft-touted fact: hiQ purportedly collects publicly available data only. Recall that LinkedIn members select from a range of privacy options that promise to restrict viewability of their profiles to enumerated designees, or not. If the latter “public option” allows access to anyone on the internet, including Google and other search engines, hiQ claims it deserves the same technological access to public profiles as the search engines enjoy, because it helps members achieve the maximum exposure they signed up for. It accuses LinkedIn of acting anti-competitively, against consumers’ interests, by selectively foreclosing its access.
But LinkedIn counters that its duties as platform owner supersede hiQ’s “different in kind” accessing of publicly viewable information. HiQ’s vacuuming up of massive amounts of user data, it argues, leads to privacy violations, including tracking of so-called “Do Not Broadcast” edits to member profiles. This argument ran aground at the preliminary injunction stage due to what the court regarded as a lack of empirical evidence that members minded.
At trial, LinkedIn will have the opportunity to present further evidence, but the issues closest to Cambridge Analytica have yet to be enunciated. How many users are able to envision the kinds of modeling that machine programs enable? What if hiQ’s analytics should unfairly characterize the marketability of an employee’s skills, leading to demotion or firing? Users might well have hesitated in selecting the public option had they understood the ramifications. Even when users have sophistication in analytics, they might still rely on the prohibition against bots in LinkedIn’s terms of use, or merely on their comfort with the “brand,” to feel safe from such weaponization.
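To make the worry concrete, consider a deliberately toy sketch of the kind of “flight risk” scoring a third-party analytics firm might run over scraped profile data. The features, weights, and flagging threshold here are invented for illustration and are not hiQ’s actual model.

```python
# Toy illustration of third-party "flight risk" scoring over scraped profile data.
# Features, weights, and the flagging threshold are invented; this is not hiQ's model.

def flight_risk_score(profile):
    """Score a profile on a 0-1 scale; higher means 'more likely to leave'."""
    score = 0.0
    score += 0.4 if profile.get("updated_headline_recently") else 0.0
    score += 0.3 if profile.get("added_new_skills") else 0.0
    score += 0.2 if profile.get("follows_competitors") else 0.0
    score += 0.1 if profile.get("years_in_role", 0) >= 4 else 0.0
    return score


member = {
    "updated_headline_recently": True,  # e.g., polished a public headline
    "added_new_skills": True,           # e.g., listed a new certification
    "follows_competitors": False,
    "years_in_role": 2,
}

score = flight_risk_score(member)
# The employer buying this number never learns why the profile changed: the same
# edits could signal professional growth rather than an intent to leave.
print("flight risk:", score, "-> flagged" if score >= 0.5 else "-> not flagged")
```

Even a model this crude shows how a member who merely refreshed a public profile could be flagged, passed over, or let go without ever knowing that a score existed, a ramification the public-option checkbox never hints at.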
Notably, the district court nixed these arguments; refused, in other words, to honor members’ reliance on LinkedIn’s user agreements to protect themselves from possible untoward effects of big-data manipulation. In contrast to the orchestrated furor abroad against Facebook and Cambridge Analytica, the liberal court declined to recognize LinkedIn’s fiduciary duty to its sharing members. The fact that hiQ has no accountability whatsoever to users suggests that in the liberal courts, traditional contract rights may be giving way to other interests. These interests are aligned with, as the district court approvingly put it, “the ability to find, aggregate, organize, and analyze data.”
‘Good Bots’ Supposedly Make Privacy Violations All Right
The district court bypassed whether the use of predictive algorithms such as those employed by hiQ can impair a person’s life chances, for example, when he flunks an employer’s or creditor’s optimization scores. It focused instead on actual member complaints (of which there were few), as well as the ways LinkedIn’s own practices lead to “Do Not Broadcast” glitches similar to hiQ’s. It may be that the eventual but unforeseen monetization of data by third parties, to which we are all vulnerable, is too general or speculative an objection to data mining for courts to consider.
Still, this case demonstrates that concern for privacy and data “repurposing” is selective. Summarizing EFF’s different positions here and in connection with Cambridge Analytica, we’d have to say that automated tools are good when they are used by researchers to audit for racial and sex discrimination, but bad when they employ psychological profiling to persuade people to vote for Donald Trump. Indeed, proposed privacy legislation has a left-liberal bias that would do little to safeguard individual fortunes in the hiQ context.
As a society we are legitimately wondering whether informed user consent can really protect us. But what is more worrisome is that here, if LinkedIn loses, user consent will have ceased altogether to be a benchmark of best practices. This is far from reassuring. Even with more privacy regulation on the horizon, there is no guaranteeing the “intelligence” gathered on us is information we’d ever knowingly entrust to the often partisan actors who paternalistically claim the right to control it.