By: Nicolaine M. Lazarre**
If you’re not paying for the product, you ARE the product. I hear that trope in many contexts. Even today, there is a perception that a paywall provides some sort of protection for one’s privacy, as if shutting down a Facebook or LinkedIn account would resolve the serious erosion of the Fourth Amendment practiced by the government and the private sector alike. We should be thinking of privacy in terms of gatekeeping. If there is a gate (whether paid or not) between a user and a service, chances are that the gatekeeper is profiting off of that user’s data.
The commoditization of our data is a risk in any transaction that directly or indirectly involves a digital intermediary. That covers a wide gamut, from the obvious, such as wireless devices and car-sharing apps, to internet service providers, smart TVs, laptops, debit card providers, wireless printers, and, by the way, Bluetooth-enabled vibrators! It is even a risk with vendors we don’t think of, such as genetic testing companies and payroll processors, that release “anonymized” information. All have the capacity to take the digital handshake they receive from us and extend it into a long-term committed relationship that comes with a tiny bit of stalking and a little dash of, well, trafficking.
Data may be retained indefinitely. Most likely, the gatekeeper is also aggregating that information with other information it has obtained about a user, selling it, sharing it with law enforcement, or attaching tracking codes, beacons, or serial IDs to the devices or media used to access the good or service. All to achieve such lofty goals as figuring out what type of toilet paper the user is most likely to buy or what kind of pop-up ads will best enhance their attempts to read the latest Kardashian update. Since I frequently shop with debit cards, I’m half expecting future pizza boxes to come pre-printed with ads for my favorite face cream.
Providing information in one context can be treated by gatekeepers as carte blanche to use that information in numerous other ways. Three years ago, in connection with a security clearance for a White House visit, a friend was required to enter her personal information, including her Social Security number, on a web form. Information about her visit now appears in online searches of her name. What public purpose does it serve to make an ordinary voter’s visit to the White House publicly available? Not clear. But her consent was not requested.
Finally, users have no ability to manage their own digital trail. I cannot call up Netflix, for instance, and say, “Please don’t store my viewing history in your database for longer than one year” (I know, I tried). As a lawyer, I cannot call Lexis and ask them to delete my research history for a particular client (I know, I tried).
Right now, as the law is structured, almost every business we interact with digitally has the right to broker our information in some way, without our control or knowledge, whether done responsibly or not. And while Facebook and LinkedIn give us some ability to control our image, most of the gatekeepers we pay for services do not. An ethical actor may exercise some self-restraint. But what about when the information changes hands? Can one’s digital trail, including one’s image, be used to identify our ethnicity and religion and target us for extra surveillance? Can it be used to “predict” whether we will break the law or make a healthy employee? Can it be combined with our zip code and information about our neighbors’ behavior and used to deny us credit…or parole? Are there strong, enforceable restraints in place to prevent such practices? What is the long-term impact of data brokering on communities of color? Is there a right to digital due process? These important questions are not being asked with the same fervor with which technological innovations are hitting the market. And, to be clear, none of them is clearly answered by our existing legislative framework.
We now have a weak and inconsistent patchwork of laws at the federal and state levels. These include Section 551 of the Cable Communications Policy Act of 1984, the Electronic Communications Privacy Act, the Fair Credit Reporting Act, the Fair Debt Collection Practices Act, the Stored Communications Act, the Health Insurance Portability and Accountability Act, and a plethora of individual state data breach statutes. An in-depth treatment of the regulatory framework is beyond the scope of this article. In general, our laws focus more on security, disclosure, and remedies than on imposing accountability, prohibiting discrimination, and setting limits. For instance, a recent Forbes article pointed out that “although the Stored Communications Act was originally intended to create additional protections for owners of electronic data, it is easier for law enforcement to obtain most private communications and other electronic content under the SCA than to obtain a warrant for more traditional correspondence or papers.” Another shortcoming of many of these statutes is a loophole for “anonymized” or non-content information. These loopholes do not reflect how data is created, shared, stored, or aggregated today. And of course, many such statutes do not prevent sharing information with partners and affiliates, however broadly those terms may be defined.
If our lawmakers and elected officials had the capacity or the will to properly address privacy in the age of big data, they would have done it a decade ago. Right now, they seem to be playing catch-up. Partly, in my view, because they have benefited, for financial gain or for law enforcement needs, from gathering and sifting through troves of information collected from individual citizens. How do you reconcile prosecuting phone companies for tracking consumers on smartphones with working closely with those same companies to access those consumers’ data? Further, from the post office to the DMV, state actors make money from selling our data. How can a state actor restrain a Spokeo or an Intelius when it is most likely the source of the information being peddled? Finally, there is no clear constituency to hold lawmakers accountable, because the public still does not fully understand or appreciate the scope of today’s surveillance. Given these inherent conflicts and the rapid pace of technological advancement, we cannot rely on legislators to get it right.
Recent developments give me the sense that we are trending in the opposite direction. Case in point: the FCC’s decision to repeal (rather than expand) broadband privacy rules and the Trump administration’s efforts to dismantle the Consumer Financial Protection Bureau. Beginning with Dwolla, an online payment company, the CFPB has been actively using its authority to prosecute unfair and deceptive acts and practices that affect consumer privacy. Michele Bachmann has already introduced a bill to repeal the Dodd-Frank Wall Street Reform and Consumer Protection Act, which created the CFPB. It is not clear what would happen to the regulation of consumer financial information upon repeal, since authority to issue regulations and take enforcement actions under the two major customer financial information statutes has been transferred to the CFPB.
But if legislators lack the political will to provide robust, EU-style privacy protections, what about the private sector? We certainly cannot expect data gatekeepers to self-police, although there are notable efforts to create industry standards, such as the PCIA. Still, I am a big believer in the private sector. I think it has the capacity to provide solutions for those willing to pay more for their privacy. From crude approaches such as Abine’s DeleteMe to sophisticated tools such as Bitcoin and virtual private networks, there is an existing and growing range of options available to privacy-conscious customers. What would help the robust creation and proliferation of these technologies is a non-discriminatory, technology-neutral framework that gives consumers the right to use privacy-enhancing devices, services, and technologies without being denied service by providers. While this may seem straightforward, it is not a given.
There are many barriers against technologies and practices that enhance privacy. From restrictions on disposable phones to taking $100 bills out of circulation, consumers are increasingly being steered away from fungible currency and anonymous purchases and toward a network of tracked transactions. There are growing efforts to limit the ability to use burner phones anonymously, such as the Pre-Paid Mobile Device Security Gap Act of 2016, which would have required buyers to provide their full name, home address, and date of birth, as well as ID, a W-2 form, or another “acceptable” document in order to purchase prepaid phones. Companies such as Amazon, the New York Times, and Netflix block the use of gift cards, which provide a great way to make small purchases anonymously. Facebook and Apple have aggressively fought for the right to prohibit the use of fake identities, ostensibly to prevent trolling, impersonation, and cyberbullying. Apple has defended a challenge to its practice of requiring personal identification information in order to conclude a sale, and it installs a Unique Device Identifier (UDID) on all of its devices. Many online vendors require consumers to create accounts, enter detailed profile information, and store credit card information in order to conduct even micro-transactions.
Some restrictions on these technologies and services are somewhat justifiably triggered by a desire to discourage money laundering, tax evasion, terrorism, and other law enforcement concerns. I say “somewhat justifiably” because I don’t quite understand why warrants are no longer sufficient. Other barriers are motivated by the enormous competitive value of consumer data. Since our regulatory framework has not caught up with prevailing data tracking and data brokering practices, we need a technology-neutral framework that prohibits discrimination against privacy-enhancing technologies. The first step toward such a measure is an informed public that moves from hyper-focused attention on the practices of free social media platforms to a more comprehensive understanding of data gatekeepers.
** The Author’s opinions are her own and do not reflect the positions of her employer.
Nicolaine Lazarre is Senior Vice President and General Counsel of the National Urban League, where she provides legal counsel and strategic guidance on risk management, regulatory and compliance issues, and the contract activities of the organization. She is an expert in nonprofit compliance and project management. To learn more about Nicolaine or her work, follow her on Twitter, Facebook, or LinkedIn.