We have written several times here over the last few years about data minimization being an important part of an effective cybersecurity program.  For most companies, the total amount of data that they control grows substantially each year, and more data generally creates more data protection risks.  Companies that have implemented effective data minimization programs are careful to collect only the data that they are likely to use, and routinely get rid of old data that they no longer need, thereby significantly reducing their data protection risks.  A recent enforcement action by the Berlin Data Protection Commissioner echoes recent U.S. regulatory developments in suggesting that companies without data minimization procedures face not only increased cybersecurity and privacy risks, but also regulatory risks—ones that can lead to penalties even when they don’t lead to a specific cyber incident.  In other words, data minimization is becoming a stand-alone regulatory obligation, in addition to being a key component of cybersecurity best practices.

On October 30, 2019, the Berlin Commissioner for Data Protection and Freedom of Information fined Deutsche Wohnen SE, a German real estate company, 14.5 million Euros for violating Article 5 of the General Data Protection Regulation (“GDPR”), which mandates that the processing of personal data shall be “adequate, relevant, and limited to what is necessary in relation to the purposes for which [data] are processed.”  Specifically, the Commissioner determined that Deutsche Wohnen SE used an archive system to store tenants’ personal data that it no longer needed, including old salary and bank statements, as well as tax, social security, and health insurance data.  These infringements were identified during an on-site inspection in June 2017, and despite explicit recommendations issued after the authority’s visit, Deutsche Wohnen apparently had not remediated the deficiencies by the time of its next inspection in March 2019.  Consequently, the supervisory authority issued a fine for the period between May 2018, when GDPR became enforceable, and March 2019.

There are two particularly noteworthy aspects of this action.  The first is the imposition of a substantial fine without evidence of any harm to the data subjects.  The fact that there was no misuse of the unneeded personal data was viewed only as a mitigating factor in determining the “effective, proportionate and dissuasive” fine amount, which was approximately half of the legal maximum of 4% of the company’s annual turnover.  The second is that there is no suggestion that the company should not have collected this data in the first place.  Rather, it was fined for not getting rid of data that it rightfully possessed but no longer needed—a criticism that can be made of many companies’ data management programs.  Deutsche Wohnen has filed an appeal against the enforcement action.

European regulators are not alone in treating data minimization as a stand-alone obligation. As we have previously discussed here, the New York Department of Financial Services Cybersecurity Regulations mandate that regulated entities maintain a data minimization program that includes procedures for the secure disposal of any nonpublic information that is no longer necessary for business operations and does not need to be maintained because of a legal or regulatory obligation.  And regulators in other states, particularly in the insurance industry, have also implemented rules mandating data minimization for regulated entities.  At the federal level in the U.S., the FTC has used its authority under Section 5(a) of the Federal Trade Commission Act, 15 U.S.C. § 45(a), to sanction companies, including most recently InfoTrax Systems, L.C. on November 12, 2019, for (among other deficiencies) “fail[ing] to have a systematic process for inventorying and deleting consumers’ personal information . . . that is no longer necessary.”

But, despite the benefits for data protection and regulatory compliance of having a strong data minimization program, many businesses maintain vast quantities of sensitive information that they don’t need.  There are several reasons for this:

  • Data retention practices are often determined by corporate culture: expectations about what should be kept, what will be available, and how long it will take to get answers based on old data are often not set out expressly. Changing practice, therefore, requires an institutional re-orientation that can be challenging to implement.
  • Relatedly, data management issues cut across almost every business function and group within an organization, often with no one ultimately in charge. Getting rid of significant amounts of old data may therefore require buy-in from senior personnel in business, risk, legal, compliance, and IT, yet no single person may have the resources and authority needed to coordinate and accomplish such a difficult task.
  • With the rise of AI and machine learning, some companies feel that they should be keeping all of their data, in case one day they want to use it for internal purposes or perhaps sell it to third parties.
  • Because data storage is cheap (especially with cloud-based hosting), and because of improved search and indexing capabilities, it is usually seen as minimally burdensome to keep any particular data set about which people are unsure. Repeating that decision again and again over time results in large increases in the company’s data.
  • Because people frequently leave the company or change roles, or simply because of the passage of time, often no one at the company knows why a certain data set was collected, what it contains, what purpose it serves, or whether anyone may still need it.

Moreover, uncertainty over the precise contents of a large electronic data set often results in keeping it because of the concern that some of the data may be subject to litigation holds or regulatory preservation obligations, and no one is going to review millions of pages to ensure that no preservation obligation exists.

These are all difficult challenges for companies trying to implement effective data minimization programs, but they can all be addressed.

  • Regulatory requirements mandating data minimization can be the impetus that some companies need to devote the necessary authority and resources to determine what kinds of data they have, what data is being collected on an ongoing basis, what should be kept for business purposes, and what isn’t needed and can be deleted.
  • In terms of spoliation concerns in the U.S., recent changes to Federal Rule of Civil Procedure 37(e) and decisions applying those changes demonstrate that U.S. courts understand that there must be a balance between the need to preserve documents for litigation purposes and the need for companies to get rid of old data to minimize cybersecurity and privacy risks. See Hefter Impact Tech. v. Sport Maska Inc., 2017 WL 3317413 (D. Mass. Aug. 3, 2017); see also Martinez v. City of Chicago, 2016 WL 3538823 (N.D. Ill. June 29, 2016).
  • Indeed, advances in data analytics and machine learning are creating opportunities for companies to responsibly delete large volumes of old data, without having to review each document to determine if it must be retained for litigation purposes or for some regulatory obligation.
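
By way of illustration only, the sketch below shows the basic shape of such an analytics-assisted approach: a simple text classifier is trained on a small seed set of documents already reviewed by counsel, and anything the model is not confident about is routed to human review rather than deleted. All of the documents, labels, and thresholds here are hypothetical.

```python
# Minimal sketch (hypothetical data and thresholds): triaging old documents
# with a text classifier so that only clearly disposable items leave review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Small seed set already reviewed by counsel: "hold" = subject to a legal or
# regulatory preservation obligation; "dispose" = eligible for deletion.
seed_docs = [
    "2012 pension audit workpapers and supporting schedules",
    "litigation hold notice re: Smith v. Acme, do not delete",
    "cafeteria menu for the week of March 5",
    "2011 holiday party logistics and catering quotes",
]
seed_labels = ["hold", "hold", "dispose", "dispose"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_docs, seed_labels)

# Score the unreviewed archive; anything that is not clearly disposable is
# preserved or routed to human review rather than deleted automatically.
archive = ["old payroll tax filings 2013", "parking garage access memo"]
hold_index = list(model.classes_).index("hold")
for doc, probs in zip(archive, model.predict_proba(archive)):
    p_hold = probs[hold_index]
    action = "preserve / human review" if p_hold > 0.2 else "disposal queue"
    print(f"{doc!r}: P(hold)={p_hold:.2f} -> {action}")
```

The key design choice is asymmetry: the model may only nominate documents for disposal review, while anything resembling a preservation obligation defaults to being kept.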

These issues, along with a step-by-step approach to responsible document deletion, were discussed in detail in our data minimization webcast here and below.

In sum, companies with large volumes of historical sensitive information that do not already have a data minimization program should give careful consideration to these issues.  Although getting rid of large volumes of old data comes with costs and risks, increasingly, so does waiting too long to start.

This article has also been posted at the Compliance & Enforcement blog sponsored by NYU Law’s Program on Corporate Compliance and Enforcement.

One way for companies to decrease their cybersecurity risks, as well as their risks under new privacy regulations, is through data minimization—significantly reducing the amount of data they hold.  By deleting old data and collecting less new data, companies will have less sensitive information to protect and process in accordance with their regulatory obligations.  But getting rid of old data isn’t easy, in part because of the legal limitations on what can be deleted.  We have previously written here about these challenges, as well as the benefits of data minimization, which include reducing:

  • the growth of a company’s data over time, and the associated storage costs;
  • lost productivity associated with searching large volumes of irrelevant data;
  • the cybersecurity and privacy risks of having large volumes of unneeded data, especially considering CCPA and GDPR-type rights of access and erasure;
  • internal audit and compliance risks;
  • contractual risks (e.g., obligations to clients and customers to delete data once it is no longer needed); and
  • the volume of documents that may be unhelpful to the company in potential, but not yet reasonably anticipated, litigation or regulatory inquiries.

As we noted, in light of these benefits and recent legal, regulatory, and technological developments, the current risks of keeping large volumes of old data may cause companies to reevaluate their long-term data management planning.  Indeed, various cybersecurity and privacy regulatory regimes, such as GDPR and the NYDFS Cyber Rules, now require companies to have policies for disposing of nonpublic information that is no longer necessary for business operations, unless those documents must be retained for legal reasons.

One challenge to implementing data minimization is the tendency of many employees to keep all of their emails unless they are forced to delete them.  To address this challenge, some companies are using tools that automatically delete emails after a certain short period of time—also known as ephemeral messaging.  In the past, many businesses have had a similar process for voicemails, such as automatic deletion after 30 days, but ephemeral messaging is relatively new for business emails, which by default are usually kept for a long period of time, if not indefinitely.

In March of 2019, the DOJ revised the provision on ephemeral messaging in its FCPA Corporate Enforcement Policy.  The original policy stated that a company would only receive full credit for remediation if it “prohibit[ed] employees from using software that generates but does not appropriately retain business records or communications.”  The revised provision requires that, to receive full remediation credit, companies retain business records and prohibit their improper destruction or deletion, including by implementing “appropriate guidance and controls on the use of personal communications and ephemeral messaging platforms that undermine the company’s ability to appropriately retain business records or communications.”

Although the revised guidance seems to open the door to some use of ephemeral messaging platforms, it provides little in the way of specifics as to what is expected.

Similarly, in December 2018, the SEC’s Office of Compliance Inspections and Examinations issued a Risk Alert reminding investment advisers of their recordkeeping obligations under Rule 204-2 of the Investment Advisers Act of 1940.  The alert was likely issued due to the increased use of ephemeral messaging for business purposes, but it provided no clear guidance on when such messaging should and should not be used.

To help fill that gap, the Sedona Conference currently has a working group dedicated to developing best practices for ephemeral data retention and minimization.

Bearing in mind that any company policy on ephemeral messaging must fit its particular facts and circumstances, we are aware of some companies that have adopted a pragmatic, risk-based approach built on the following principles:

Business Records vs. Disposable Data

Some companies are implementing data management policies that classify data as either “Business Records” or “Disposable Data.”  Business Records cover data that must be kept for a significant period of time, either (a) for legal or regulatory reasons or (b) because it has lasting business value.  Any document that is not a Business Record is Disposable Data, which is not subject to any legal or regulatory retention requirement and does not have sufficient business importance to be retained for an extended period.  These companies require that ephemeral messaging be used only for communications that involve Disposable Data.  They achieve this by classifying communications as either Primary or Secondary.

Primary vs. Secondary Communications

“Primary Communications” are work emails (and their attachments), which are automatically preserved for a long period of time.  As a result, Primary Communications are the medium through which employees should communicate information that constitutes Business Records.  “Secondary Communications” are communications that go through the company’s network or servers but are preserved only for a short period of time (e.g., 14 days before being automatically deleted).  Secondary Communications, often referred to as ephemeral messaging, may include voicemails, texts, instant messages, Slack, Symphony, and other company applications used for communications involving only Disposable Data, such as routine scheduling, other non-substantive business communications, and personal messages.

These companies direct employees not to use Secondary Communications to transmit Business Records, and if that happens, require employees to take affirmative steps to preserve the communication for the same length of time that Primary Communications are preserved (for example, by taking a screenshot and sending the image through the company’s email system).  Relatedly, these policies often require employees who find themselves to be passive recipients of Business Records via Secondary Communications to move those messages to a Primary Communication channel.
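
For illustration, here is a minimal sketch of how such a retention scheme might be expressed in code. The channel names and retention periods are our own assumptions, not any particular company's actual policy settings.

```python
# Hypothetical sketch of a Primary/Secondary retention scheme. Retention
# periods and channel names are illustrative assumptions only.
from datetime import datetime, timedelta, timezone

RETENTION = {
    "email": timedelta(days=365 * 7),  # Primary: long-term preservation
    "chat": timedelta(days=14),        # Secondary: ephemeral
    "voicemail": timedelta(days=30),   # Secondary: ephemeral
}

def is_expired(channel: str, sent_at: datetime) -> bool:
    """True once a message has aged past its channel's retention period."""
    return datetime.now(timezone.utc) - sent_at > RETENTION[channel]

# A Business Record sent over a Secondary channel must be copied to a Primary
# channel (e.g., forwarded to email) before its ephemeral copy expires.
sent = datetime(2019, 11, 1, tzinfo=timezone.utc)
print(is_expired("chat", sent))   # True once 14 days have passed
print(is_expired("email", sent))  # False for roughly seven years
```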

This approach is consistent with the guidance provided in the SEC’s December 2018 Risk Alert, which provides:

In the event that an employee receives an electronic message using a form of communication prohibited by the firm for business purposes, requiring in firm procedures that the employee move those messages to another electronic system that the adviser determines can be used in compliance with its books and records obligations, and including specific instructions to employees on how to do so.

Some companies also have regular reminders in their Secondary Communication channels that these applications are not to be used for communicating Business Records.  They are also combining their policies on ephemeral messaging with policies on the use of personal applications for company business, which we have written on separately here.

Again, although companies may find these general principles helpful, businesses must develop messaging policies that are tailored to their own practices and risks.  We will continue to monitor developments in the regulation of ephemeral and personal messaging for businesses here and at the Davis Polk Cyber Portal, which is available to help clients meet their evolving cybersecurity and privacy obligations.

This article has also been posted at the Compliance & Enforcement blog sponsored by NYU Law’s Program on Corporate Compliance and Enforcement.

We have written here before about the challenges and benefits of getting rid of old data.  As we have noted, in light of recent legal, regulatory, and technological developments, companies should reevaluate their long-term data management planning.  Last week, the New York Department of Financial Services (“NYDFS”) issued a reminder that by September 4, 2018, covered entities must have a policy for disposing of nonpublic information that is no longer necessary for business operations or for other legitimate business purposes, unless it is required to be retained by law or regulation.  GDPR also requires companies to minimize the personal data that they store to what is necessary.  At the same time, the case law that has developed under the new Federal Rules of Civil Procedure on spoliation has significantly reduced the risk of sanctions resulting from accidental deletion of electronic materials that might be relevant to a litigation or investigation.  But despite these developments, companies operating in the U.S. still have little guidance on how to balance the costs and risks of deleting large volumes of data against the long-term costs and risks of keeping it.

For this reason, we see the recent release of the Sedona Conference Principles and Commentary on Defensible Disposition as a watershed moment for data minimization in the United States.  The Sedona Conference is one of the nation’s premier non-partisan, non-profit law-and-policy think tanks, whose publications have been relied upon as authoritative by courts when faced with novel data issues.  The Sedona Paper begins with the core principle acknowledged in Sedona’s 2014 Commentary on Information Governance: “The effective, timely, and consistent disposal of physical and electronic information that no longer needs to be retained should be a core component of any Information Governance program.”

The Paper builds on this statement with the following three new principles:

PRINCIPLE 1.  Absent a legal retention or preservation obligation, organizations may dispose of their information.

PRINCIPLE 2.  When designing and implementing an information disposition program, organizations should identify and manage the risks of over-retention.

PRINCIPLE 3.  Disposition should be based on Information Governance policies that reflect and harmonize with an organization’s information, technological capabilities, and objectives.

In the guidance and commentary accompanying these principles, the Paper makes several compelling arguments for data minimization, many of which echo similar arguments that we’ve made here over the last year:

  • When considering whether to implement a data minimization program, and the scope of any such program, companies should give serious consideration to the long-term costs and risks of keeping data, including:
    • the projected overall growth in the size of the company’s data over the next 5-10 years, and the associated storage costs
    • lost productivity associated with searching large volumes of irrelevant data
    • the cybersecurity and privacy risks of having large volumes of unneeded data, especially considering GDPR-type rights of erasure
    • internal audit and compliance risks
    • contractual risks (e.g., obligations to clients and customers to delete data once it is no longer needed)
    • potential, but not yet reasonably anticipated, litigation or regulatory inquiries.
  • If there is no legal retention obligation, information should be disposed of as soon as the cost and risk of retaining it outweigh its likely business value.
  • Typically, as information ages, its business value decreases, and the cost and risk of keeping it increases.
  • Absent a legal obligation to retain certain documents, companies may dispose of those documents, even if an obligation to keep those documents arises at some point in the future.
  • Data minimization programs that target narrow categories of documents or a small group of custodians carry greater risk than programs that are generally applicable.
  • Data minimization programs that are not enforced broadly lead to selective disposal and thereby increased risk.
  • Regular data minimization programs may need to be suspended due to legal hold requirements, but those programs should ensure that routine disposal of documents resumes promptly when the legal hold requirements are lifted.
  • For heavily regulated industries, time-based data minimization programs that cover periods beyond any statutory document retention obligation can be prudent.
  • Getting rid of old irrelevant data makes future litigation more efficient by:
    • reducing the time and effort required to identify potentially relevant information,
    • reducing the cost of searching and analyzing large, and often outdated, data sources,
    • reducing the cost of implementing and monitoring document preservation obligations,
    • reducing the number of documents to be collected, processed, and reviewed, and
    • reducing the risk that relevant documents will be lost or missed in a sea of irrelevant documents.

The Paper is subject to public comment, but it provides a helpful roadmap for a sensible and effective data disposal program.  Still, implementation remains tricky.  Companies face a web of overlapping local, federal, and international document preservation obligations, along with their legal hold obligations associated with lawsuits and regulatory inquiries.  No company is going to pay someone to actually review millions of old documents to separate the ones that need to be preserved from the ones that can be deleted.

But that separation can be done efficiently, and in a cost-effective manner, through careful planning and the utilization of advanced data management software and data analytics.  These issues, along with a step-by-step approach to responsible document deletion, are discussed further in the below webcast.

For years, the default setting at many companies was to keep electronic data indefinitely. Storage is cheap, there are legal risks associated with deleting data, and you never know when an email from 10 years ago is going to become important. Some companies have document management policies, but often they are not rigorously enforced or they are suspended whenever litigation arises. The result is that most companies have enormous amounts of old data and are generating significant amounts of additional data every day. As the cybersecurity and data privacy risks associated with having large volumes of extraneous data increase, regulators have started to require companies to get rid of data that they don’t need for business, regulatory or legal reasons.  Here are some recent examples:

  • NYDFS – Starting on September 1, 2018, companies regulated by the New York Department of Financial Services’ cybersecurity rules are required to have a data minimization program that includes “policies and procedures for the secure disposal on a periodic basis of any Nonpublic Information…that is no longer necessary for business operations or for other legitimate business purposes… except where such information is otherwise to be retained by law or regulation, or where targeted disposal is not reasonably feasible due to the manner in which the information is maintained.”
  • GDPR – The EU’s new General Data Protection Regulation, which came into effect on May 25, 2018, requires the limitation of personal data to “what is necessary in relation to the purposes for which [such data] are processed.”
  • US State Laws – The newly enacted South Carolina Insurance Data Security Act, which is based on the model insurance data security law, requires covered entities to “define and periodically reevaluate a schedule for retention of nonpublic information and a mechanism for its destruction when no longer needed.”  The Act becomes effective on January 1, 2019.  In addition, the New York Attorney General recently released cybersecurity guidance for small businesses, which states, “Hackers can’t steal sensitive information if it’s not there.  To limit the risks from an attack, delete customer or employee information files you no longer need.”

Although regulators are requiring data minimization programs, implementation remains tricky. Assuming that no one is going to actually review all of the thousands or millions of documents that are to be deleted, sorting documents that must be preserved for legal or regulatory purposes from those that can safely be deleted requires careful planning in order to be effective and not an enormous drain on resources. As discussed in our recent webcast on Cybersecurity and Data Management, recent cases under the Federal Rules of Civil Procedure on spoliation significantly reduce the risk of sanctions resulting from the accidental deletion of electronic materials that might be relevant to litigation. In addition, advances in data analytics and machine learning are creating opportunities for companies to responsibly delete large volumes of old data, without having to review each document to determine if it must be retained for litigation purposes or for some regulatory obligation. These issues, along with a step-by-step approach to responsible document deletion, are also discussed in the below webcast.

The Davis Polk Cyber Portal is now available to assist our clients in their efforts to maintain compliance with their cybersecurity regulatory obligations. If you have questions about the Portal, please contact cyberportal@davispolk.com.

The author gratefully acknowledges the assistance of Law Clerk Daniela Dekhtyar-McCarthy in preparing this entry.

In January 2018, at the Eleventh Annual International Conference on Computers, Privacy and Data Protection (the “Conference”) in Brussels, one panel that made some headlines centered around blockchain technology in the context of data protection. The core inquiry of the panel was two-fold: (1) whether blockchain technology can facilitate data protection regulatory objectives and (2) whether the same technology makes it more difficult to enforce data protection laws. Unsurprisingly, neither inquiry produces a clear-cut answer.

On the one hand, blockchain technology could potentially advance the “privacy-by-design-and-default” principle promulgated by the E.U.’s General Data Protection Regulation (“GDPR”), which comes into force on May 25, 2018. But on the other hand, some of the technology’s signature features (i.e., immutability and irreversibility) raise concerns related to the dual principles of (1) data minimization and (2) the right to be forgotten, which underpin those same regulations. The inquiry is further muddied by the fact that (1) this discussion speculates about the compliance potential of a distributed technology in light of regulations that are designed with centralization in mind, and (2) not all blockchains are created equal—in fact, while they can be grouped into broad categories (for example, public vs. private), the analysis must always be done on a case-by-case basis.

Explaining Blockchain in Data Protection Jargon. As a preliminary matter, let’s explain why blockchain technology would even fall within the purview of data protection regulations. Since GDPR is currently considered the gold standard in this realm, we will look to it for the key metrics of analysis. Under the GDPR, data protection rules apply only if an entity processes identified or identifiable personal data—that is, data relating to a living natural person. The Article 29 Working Party explained in its Opinion 05/2014 (WP216) that anonymized data (i.e., data processed so that identification is irreversibly prevented) is not subject to data protection rules, but pseudonymized data is. In a public blockchain environment, every transaction carried out by a particular user is linked to the same public key. Even where that key is published only as an unreadable hash, it ties all of the user’s transactions together, and IP addresses or other metadata could make the user identifiable, thus putting these blockchains within the scope of GDPR. On the other hand, a blockchain such as Hyperledger, one implementation of which is designed to track products or materials in a supply chain, would not fall within the regulatory scope because no personal data is involved.
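
To make the pseudonymization point concrete, consider the minimal sketch below (the key material is a placeholder): a hashed public key is unreadable on its face, but it is stable, so it links every transaction by the same user together, which is why it is pseudonymous rather than anonymous.

```python
# Minimal sketch: a hashed public key is unreadable but stable, so it links
# all of a user's transactions together (pseudonymization, not anonymization).
import hashlib

def pseudonym(public_key: str) -> str:
    return hashlib.sha256(public_key.encode()).hexdigest()[:16]

alice_key = "alice-placeholder-public-key"  # illustrative key material
transactions = [("pay 1.2 units", alice_key), ("pay 0.4 units", alice_key)]

for action, key in transactions:
    print(action, "->", pseudonym(key))
# Both lines show the same pseudonym; combined with IP addresses or other
# metadata, that linkage can make the user identifiable under the GDPR.
```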

Self-sovereign Identity as the Ultimate Solution for Privacy-by-design? One central tenet of the GDPR is the principle of privacy-by-design, whereby systems are set up from the start to promote privacy and data protection compliance objectives. Blockchain technology was designed, in this sense, to ensure data integrity by being resistant to data corruption. It was also designed to be breach-resistant, by moving from a centralized database model with a single point of failure to a distributed scheme. As one Conference panelist noted, blockchain technology also enables new forms of information sharing whereby parties to a transaction do not need to reveal any more information about themselves than is absolutely necessary for that particular transaction. For instance, in the context of credential management, individuals can disclose personal data to a trusted authority that would be responsible for issuing attestations of particular attributes (e.g., citizenship, age, address), without the underlying personal data having to be transferred every time. This could help bring a particular transaction into compliance with, or outside the scope of, GDPR’s strict cross-border data transfer rules (see GDPR Chapter V and recitals 6, 48, 101-103, 107, 110-115). As for other opportunities, another panelist noted that blockchain technology presents unique possibilities for GDPR compliance in the areas of (1) notarization of consent, (2) notification of usage of personal data, and (3) real-time information sharing between a data controller and data processors. Taking this one step further, yet another panelist envisions a future where self-sovereign identity enabled by blockchain technology is the only way to be GDPR-compliant.

Can We Forget Immutable Data? The very features that make blockchain technology so good at ensuring data integrity (its immutability and non-selective preservation) also pose challenges for compliance with key data protection principles. By capturing every transaction and making it publicly visible, the technology inevitably runs afoul of the principle of data minimization enshrined in GDPR Article 5. Because information cannot be removed once it is recorded, blockchain technology also conflicts with the storage limitation principle. Moreover, Article 17 of the GDPR recognizes a right to be forgotten, or a right to erasure, as some call it. Under this principle, an individual is empowered to request the removal of personal data if, among other grounds, it is no longer necessary in light of the original purpose for collection and processing or the data subject withdraws consent. At the end of the day, whether blockchain technology fundamentally conflicts with the right to be forgotten depends on what “erasure” means, and whether irreversible encryption, revocation of access rights (in smart contract contexts), or other similar mechanisms could suffice.
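
One such mechanism, sometimes called crypto-shredding, can be sketched as follows: each data subject's records are encrypted with a per-subject key held off-chain, and "erasure" is accomplished by destroying the key. The sketch below, which uses the Python cryptography package, is purely illustrative; whether destroying a key amounts to Article 17 "erasure" is precisely the unresolved legal question.

```python
# Hypothetical crypto-shredding sketch: ciphertext can sit on an immutable
# ledger, while the per-subject key is held off-chain and can be destroyed.
from cryptography.fernet import Fernet

key_store = {}  # off-chain, deletable per-subject keys (illustrative)

def record(subject_id: str, data: bytes) -> bytes:
    """Encrypt a subject's data; the ciphertext may be stored immutably."""
    key = key_store.setdefault(subject_id, Fernet.generate_key())
    return Fernet(key).encrypt(data)

def forget(subject_id: str) -> None:
    """'Erase' by destroying the key, leaving the ciphertext unreadable."""
    key_store.pop(subject_id, None)

ciphertext = record("subject-42", b"tenant bank account details")
forget("subject-42")
# The immutable ciphertext remains, but without the key it cannot be decrypted.
```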

Can Distributed Technology Thrive in the Age of a Centralized Regulatory Scheme? As Deloitte recently observed, in light of the pressure to prepare for GDPR compliance, stakeholders have increasingly engaged in research to make blockchain mechanisms editable, and prototypes have already been developed in response to the needs of large financial institutions. The irony is apparent, at least in the current proposed prototypes: maintaining the immutability premise of the technology while complying with data protection rules requires conferring the authority to alter information on the chain to a “trusted administrator.” In other words, short of relying on the consent of a majority of the nodes on the chain to create a new fork, in order for a distributed ledger to comply with the GDPR, the technology has to be reconfigured with a centralized patch. Does this mean that the GDPR is not as technology-neutral or agnostic as some might claim? Given that the GDPR was designed around notions of a centralized data governance model (i.e., cloud computing and a data controller) that fit blockchain technology poorly, query whether aspects of the Regulation have already become outdated before it even enters into force on May 25, 2018.

Key Takeaways

  • Blockchain technology’s approach of pseudonymizing (rather than anonymizing) personal data could bring it within the scope of data protection obligations under the GDPR and other similar regulations.
  • The technology’s immutable and distributed features present opportunities to advance the notion of privacy-by-design-and-by-default. These features could also be leveraged, via a credential-granting mechanism, to avoid the need to transfer personal data for authentication purposes, potentially taking such activity outside the scope of data protection rules.
  • The same features, however, could also create challenges for the right to be forgotten and data minimization principles under the GDPR.
  • Stakeholders have identified a regulatory-compliant fix in centralizing the authority to edit information on certain blockchains. This and similar approaches could be perceived as threats to the core identity of the technology, and they raise the question of whether the GDPR and other similar data protection schemes are fundamentally incompatible with a decentralized technology like the blockchain.

In our cybersecurity and data management webcast, now available below, Davis Polk partners Avi Gesser and Gabe Rosenberg and associate Matt Kelly recently discussed getting rid of old documents to reduce cyber risk.

To avoid ending up in the news as the latest victim of a cyber-attack, companies are looking to improve their data security.  One way is data reduction: getting rid of old data that you don’t need for business purposes and are not legally required to keep.  The less data you have, the easier it is to protect.

New guidance from the FTC, and recent regulations by the NYDFS, make an express connection between data minimization and cybersecurity.  The NYDFS cybersecurity rules provide that, by September 1, 2018, covered entities must have “policies and procedures for the secure disposal on a periodic basis of any Nonpublic Information . . . that is no longer necessary for business operations or for other legitimate business purposes of the Covered Entity, except where such information is otherwise required to be retained by law or regulation . . .”

The case law that has developed under the new Federal Rules of Civil Procedure on spoliation has reduced the risk of sanctions resulting from accidental deletion of electronic materials that might be relevant to a litigation.  But taking millions of electronic documents and sorting those that need to be kept for legal reasons from those that can be deleted has, until recently, been so costly and complicated that few companies have even tried.

However, recent advances in data analytics and machine learning are creating opportunities for companies to responsibly delete large volumes of old data, without having to review each document to make sure it is not subject to a legal hold.

These tricky issues, along with a step-by-step approach to responsible document deletion, are discussed in the recent webcast below.

The Cybersecurity Law Report recently published an article by Davis Polk titled Lessons from Equifax on How to Mitigate Post-Breach Legal Liability.  The article analyzes the July 2019 settlement between Equifax and the Federal Trade Commission, Consumer Financial Protection Bureau, and 50 state and territorial attorneys general and uses lessons from the Equifax breach to examine the categories of legal liability that companies may face following successful cyber attacks.  The article also examines 10 steps companies are taking to mitigate their post-breach legal liability and to implement the kinds of requirements set forth in the Equifax settlement.

The article is reproduced in full below.  A PDF of the article is available here and at The Cybersecurity Law Report.


On July 22, 2019, the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB) and 50 state and territorial attorneys general settled their claims against Equifax Inc. related to a massive 2017 breach of Equifax data. That settlement also resolves hundreds of civil consumer-fraud class actions brought against Equifax, but it does not address a securities-fraud class action that Equifax’s shareholders brought against the company in the wake of the breach, which could still result in significant recovery for Equifax shareholders.

The Equifax settlement and the progress of the securities-fraud class action are instructive as to how civil and regulatory liability will play out for companies imperiled by large cyber events. Aside from loss of consumer and employee confidence, reputational damage and other losses resulting directly from a successful cyber attack, there are three large buckets of legal liability that companies face: (1) federal and state regulators, (2) classes of consumers and (3) classes of shareholders (for public companies).

See also “Reducing Risk in the Dawn of Equifax and Other Cyber-Related Securities Fraud Class Actions” (Feb. 13, 2019).

Regulatory Liability

The SEC, FTC, CFTC and state attorneys general all have a potential role in imposing civil penalties or other forms of liability in the aftermath of a cyber event.

SEC

The SEC can bring actions against public companies for failing to disclose in their quarterly filings that a material breach has occurred or for making materially misleading statements about a company’s cybersecurity policies. The SEC pursued this course when Yahoo! Inc. failed to disclose a data breach for over two years, resulting in Yahoo!’s agreement to pay a $35‑million civil penalty.

The SEC can also bring actions against certain regulated entities for failure to take reasonable steps to secure customers’ personal information, including actions to enforce the “Safeguards Rule,” 17 C.F.R. § 248.30(a), which requires registered broker-dealers and investment advisers to adopt written policies and procedures for protecting customer data. The SEC extracted a $1‑million penalty from Voya Financial Advisors Inc. after the company suffered a cyber intrusion due to, among other things, Voya’s failure to adopt reasonable written policies that complied with the Safeguards Rule.

See “SEC Risk Alert Highlights Policy Design and Implementation Failures and Roadmaps Future Enforcement” (Apr. 24, 2019).

FTC

The FTC, CFTC and state attorneys general can also pursue claims against companies that fail to take reasonable steps to protect customer data. For instance, Section 5 of the FTC Act prohibits certain unfair or deceptive commercial acts or practices, and the FTC has used this authority – as it did in Equifax’s case – to impose liability on companies both for failing to adopt adequate security measures and for misrepresenting (or failing to disclose) weaknesses in those security measures.

See CSLR’s three-part series on lessons from the FTC’s 2018 Privacy and Data Security Update: “Enforcement Takeaways” (Apr. 24, 2019); “Financial Privacy, COPPA and International Enforcement” (May 1, 2019); and “Hearings, Reports and 2019 Predictions” (May 8, 2019).

CFTC

Similarly, the Commodity Exchange Act gives the CFTC the ability to bring enforcement actions for fraudulent or manipulative conduct in connection with interstate commodities markets.

State AGs

Relying on breach-notification laws enacted in all 50 states and the District of Columbia, state attorneys general may additionally bring claims against companies that fail to provide sufficient notice of breaches to consumers or directly to the attorney general’s office. Many of those state statutes also require companies to take reasonable measures to protect personal information and allow the state attorney general to bring actions for violations.

See “The Growing Role of State AGs in Privacy Enforcement” (Nov. 28, 2018).

Consumer Class Actions

Following a major data breach, consumers often file class actions against the breached company, typically bringing claims for negligence and violation of state consumer protection laws (as well as, in some cases, claims for unjust enrichment, breach of contract/implied contract, or negligent misrepresentation). Immediately after the Equifax breach was announced, consumers filed complaints against the company alleging that it had willfully, recklessly or negligently failed to maintain adequate technological and cybersecurity safeguards to protect users’ data from unauthorized access. Yahoo!, Target and Home Depot were subject to similar suits after data breaches exposed personal information held by those entities.

Shareholder Class Actions

For public companies, shareholders may bring actions to recover losses in the value of their shares following disclosure of a breach. These actions often depend on attributing the stock price decline to a company’s fraudulent statements touting the quality of its cybersecurity programs – statements that, in the wake of a breach, were arguably revealed to be false or misleading. In addition to the Equifax litigation, Yahoo!, PayPal, Chegg, and Marriott have all faced securities fraud class actions following breaches at those companies.

Shareholders may also bring derivative cases in the name of the company against the directors for mismanagement in failing to prevent cyber events or adopt adequate safeguards for mitigating and responding to them. For instance, following the Yahoo! breach, plaintiffs in shareholder derivative suits alleged that the company’s officers and directors had failed to protect users’ data, notify users of the breach and remediate the breaches – even as they sold some of their own Yahoo! shares. Those suits resulted in a $29‑million settlement, the first significant recovery in a cyber-related derivative lawsuit following disappointing outcomes for plaintiffs’ attorneys in cyber-related derivative suits brought against Wyndham and Home Depot.

Which of these three buckets of legal liability will end up posing the most serious threat to companies that have experienced a large data breach remains unclear, but all three are coming quickly in the wake of public breach disclosures.

Just a week after the Equifax settlement, Capital One announced a breach that affected approximately 106 million credit card applicants – including 140,000 customers whose Social Security numbers were stolen and 80,000 customers whose bank account numbers were compromised. Capital One already faces at least three consumer class action lawsuits arising from the breach and several state regulatory inquiries. Securities class actions will likely follow, considering that Capital One’s stock price dropped significantly on the day the breach was announced, erasing over $1 billion in market capitalization. Indeed, plaintiffs’ law firms are actively recruiting investors in Capital One to serve as lead plaintiffs in future securities lawsuits based on the decline in Capital One’s share price following disclosure of the breach.

Equifax’s Regulatory Settlement

Federal and state regulators are under increasing pressure to impose meaningful penalties on companies that have experienced data breaches and have not implemented adequate data safeguards. The regulatory portion of the Equifax settlement included $275 million in civil penalties imposed by the Consumer Financial Protection Bureau and state/territorial attorneys general, or approximately $1.87 on a per-consumer basis.

See “Learning From the Equifax Settlement” (Jul. 31, 2019).

Equifax’s Consumer Class Action Settlement

To settle the consumer class actions and regulatory claims against it, Equifax agreed to pay $380.5 million (and potentially up to $505.5 million) into a fund that will, among other things, cover credit-monitoring services for affected consumers and compensate them for out-of-pocket expenses “fairly traceable” to the breach. With the personal information of as many as 147 million people affected by the breach, the total amount of the fund – even if fully funded with $505.5 million – corresponds to about $3.44 per consumer. Critics described the settlement figure as “grievously low,” “too little, too late” and insufficient to actually deter misconduct or negligence by companies susceptible to data breaches.

Reasons for Small Settlements

Small settlements in consumer cases are largely due to the high bars for recovery. Plaintiffs typically must prove that they suffered actual harm as a result of the data breach. But it is often difficult to know whether a particular consumer’s data has been accessed or used, and even if that could be established, damages are hard to quantify if credit companies promise to make consumers whole for any losses and provide free credit and identity theft monitoring. Indeed, many large-scale consumer class actions arising from data breaches have been dismissed on the grounds that plaintiffs cannot even establish standing to bring the case because they cannot show that they have suffered any harm. Several courts have found that, where consumers could not point to specific, concrete injuries (such as fraudulent charges) resulting from a data breach, their injuries were hypothetical future harms insufficient to confer standing.

See also “The New Normal: Easier Data Breach Standing Is Here to Stay” (Feb. 6, 2019).

CCPA’s Attempt to Address Deficient Monetary Recoveries

California has attempted to address the deficiencies and incentive structures that result in low recoveries in cyber-related consumer cases through the California Consumer Privacy Act (CCPA) of 2018. The statute permits certain consumers whose personal information is subject to unauthorized access to recover between $100 and $750 per consumer per incident (or actual damages, whichever is greater) if the breach results from a “business’s violation of the duty to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information.” This statute – and comparable laws being proposed in other states – creates the possibility for more potent consumer class actions going forward. In CCPA private suits, it may be difficult for businesses that experience a breach to prove that their security procedures were adequate, and the cases may involve extensive and perhaps embarrassing discovery into a company’s cybersecurity practices. These factors, combined with the threat of significant statutory fines, may create more serious civil exposure for companies that have experienced large breaches.

See CSLR’s two-part series on CCPA priorities: “Turning Legislation Prep Into a Program Shift” (Jun. 5, 2019); “Tackling Data Subject Rights Requests and Vendors” (Jun. 12, 2019); and its two-part series on preparing for the CCPA: “Securing Buy-In and Setting the Scope” (Feb. 27, 2019); and “Best Practices and Understanding Enforcement” (Mar. 6, 2019).

Equifax’s Securities Fraud Litigation

The securities-fraud suit pending against Equifax presents another avenue for significant liability. In securities class actions (unlike consumer class actions), the plaintiffs are the company’s shareholders, and the measure of damages is the loss in value of the company’s stock resulting from its alleged misrepresentations or omissions. On the day the Equifax breach was announced, the company’s stock closed at $142.72. Eight days later, it had declined to $92.98, a decrease of approximately 35 percent. Three months after the announcement, it had climbed back only to $116.83, reflecting an approximately $3‑billion loss in market capitalization from the date when the breach became public.

Over the past few months, the securities-fraud case against Equifax has moved steadily forward. Plaintiffs filed suit on September 8, 2017, just one day after Equifax announced the breach. The court denied the defendants’ motion to dismiss in relevant part on January 28, 2019, permitting the plaintiffs’ claims against Equifax to proceed. The court held that plaintiffs had sufficiently alleged that Equifax made misleading statements about the quality of its cybersecurity protections and its compliance with data protection laws. Although the federal securities laws pose a heightened bar for pleading scienter, the court concluded that plaintiffs had cleared this bar by pointing to evidence that Equifax knew – based on pre-breach audit reports, investigations and warnings from employees – about the inadequacy of its security systems at the time it made statements touting them. Finally, the court concluded that plaintiffs had adequately alleged loss causation by pointing to a potential causal connection between revelations about Equifax’s cybersecurity failings and the decline in its stock price.

In July 2019, the court denied Equifax’s request to file an interlocutory appeal of the order on the motion to dismiss. Plaintiffs have also filed a motion for class certification, which defendants have opposed – in part on the ground that plaintiffs’ damages methodology will overcompensate the plaintiff class by permitting them to recover for stock declines that resulted from the fact of the breach itself (as opposed to any alleged misrepresentations or omissions about Equifax’s data security). The motion remains pending, and discovery is proceeding in the case.

Mitigating Cybersecurity Risk

The Equifax settlement and the pending class action securities case provide several important data points for companies trying to assess their cyber risks and how best to reduce those risks. Companies should review the settlement, as well as the measures imposed on other companies as part of cybersecurity resolutions, and see (1) how they compare, (2) whether there is significant risk that their cybersecurity will be viewed as inadequate or their statements about their cybersecurity will be viewed as inaccurate and (3) what steps they can take to reduce such risks.

Guidance on Achieving Reasonable Security From Equifax

The Equifax settlement provides insight into what regulators view as reasonable cybersecurity measures. As such, it provides some guidance for companies on how to (1) establish reasonable cybersecurity techniques to reduce the risks of civil and regulatory liability and (2) avoid regulatory and shareholder civil risk arising from public claims that the company’s cybersecurity is “reasonable,” “effective” or reflects “best practices,” if such statements do not match how courts or regulators would view the company’s data protection measures.

The settlement requires Equifax to:

  • identify an employee who will be responsible for the company’s information security initiative;
  • annually review internal and external security risks and implement any measures necessary to mitigate or eliminate them;
  • evaluate and test the efficacy of its security measures;
  • adopt (and enforce) written policies or guidelines aimed at implementing an enhanced information security program;
  • offer regular training programs on cybersecurity issues, including at least annual training on security awareness for all employees;
  • keep the board of directors (or a relevant subcommittee) updated about the company’s information security program; and
  • ensure that third parties with access to Equifax data are employing sufficient cybersecurity measures.

Top Ten Data Protection Measures

The following are ten examples of specific steps companies are taking to implement the kinds of requirements set forth in the Equifax settlement and to reduce their cyber risk:

  1. mapping where personal information and sensitive data are collected and stored in the company, and knowing what is connected to the network;
  2. encrypting sensitive data on the network and on portable devices such as laptops;
  3. implementing multi-factor authentication for remote logins to their networks, and discontinuing access through webmail programs;
  4. granting employees access only to the parts of the network that they need to do their work;
  5. limiting the number of individuals with administrative computer privileges, as well as the length of time privileged access is granted;
  6. ensuring prompt adoption of software patches and updates;
  7. conducting regular penetration testing and vulnerability assessments;
  8. monitoring computer networks for suspicious behavior and unauthorized activity by employees;
  9. maintaining an updated incident response plan, and conducting annual tabletop exercises to test the plan; and
  10. having a data minimization policy that allows for the identification and deletion of old sensitive data that is no longer needed for business, legal, or regulatory purposes.
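
As a deliberately conservative illustration of the last step, the sketch below merely identifies long-untouched files as candidates for disposal review; the path and age cutoff are hypothetical, and any real program would need to check legal holds and retention schedules before anything is actually deleted.

```python
# Hypothetical sketch for step 10: flag files untouched for years as
# candidates for disposal *review* (never automatic deletion), since legal
# holds and retention schedules must be checked first.
import time
from pathlib import Path

CUTOFF_SECONDS = 7 * 365 * 24 * 3600  # illustrative: unmodified for ~7 years

def stale_files(root: str):
    now = time.time()
    for path in Path(root).rglob("*"):
        if path.is_file() and now - path.stat().st_mtime > CUTOFF_SECONDS:
            yield path

for f in stale_files("/archives/shared-drive"):  # illustrative path
    print("disposal-review candidate:", f)
```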

We have previously written about legal risks companies will face from the California Consumer Privacy Act (CCPA) when it goes into effect on January 1, 2020.  In short, companies can be subject to consumer class actions alleging statutory damages for mishandled data—and a key defense to those suits will be evidence of reasonable security policies and procedures.

The CCPA is just one example of minimum cybersecurity standards being imposed on companies.  Another important example is New York’s Stop Hacks and Improve Electronic Data Security (SHIELD) Act, which will begin requiring substantive cybersecurity compliance on March 21, 2020.

Although the CCPA has received much more attention from companies, the SHIELD Act is also worth careful consideration because: (1) it applies to thousands of companies worldwide; (2) it imposes substantial new cybersecurity obligations on regulated organizations; (3) it aligns its standards for an effective cybersecurity program to prescriptive requirements under the Gramm-Leach-Bliley Act (GLBA) Safeguards Rule and New York Department of Financial Services (NYDFS) cybersecurity requirements; and (4) it includes interesting new breach notification obligations and remedy requirements.

The Broad Reach of the SHIELD Act

New York’s existing cybersecurity and data privacy laws, N.Y. Gen. Bus. Law § 899-aa and the NYDFS cybersecurity requirements at 23 NYCRR 500, apply only to organizations doing business in New York and those regulated by the NYDFS, respectively.  By contrast, the SHIELD Act applies to any business that owns or licenses “private information” of New York residents in electronic form, regardless of whether the business otherwise operates in New York State.  As a result, all companies should investigate the sources of the data they collect to assess whether the SHIELD Act will apply to them.

Moreover, the SHIELD Act expands the definition of “private information” under New York’s existing data breach notification statute to include identifying information in combination with account numbers usable to access financial accounts or biometric data, as well as a username or email address in combination with a password or security question and answer that would permit access to an online account.

New Data Security Protection Requirements

The key evolution of the SHIELD Act is the requirement it imposes on regulated companies to implement specific data security protections.  The substantive cybersecurity requirements of the Act go beyond merely requiring “reasonable security measures” like the laws of many states, including Illinois.  And while New York is not the first state to impose prescriptive data security requirements, New York’s size and business importance make the requirements of the SHIELD Act harder to ignore than those imposed by some other states.

New York’s new requirements will seem familiar to entities regulated by the NYDFS.  These obligations also line up with the FTC’s proposed revisions to the GLBA Safeguards Rule.  But unlike those laws, which only apply to specific financial services companies, as noted above, the SHIELD Act applies broadly, bringing data security regulations for many companies in accord with those imposed on the financial sector.

Specifically, the SHIELD Act requires companies to implement significant administrative, technical, and physical safeguards:

  • Administrative Safeguards:
    • the designation of one or more employees to coordinate the security program;
    • identification of reasonably foreseeable internal and external risks;
    • assessment of the sufficiency of safeguards in place to control the identified risks;
    • training and managing employees in the security program practices and procedures;
    • the selection of service providers capable of maintaining appropriate safeguards, and requiring those safeguards by contract; and
    • adjusting the security program in light of business changes or new circumstances.
  • Technical Safeguards:
    • assessing risks in network and software design;
    • assessing risks in information processing, transmission, and storage;
    • detecting, preventing, and responding to attacks or system failures; and
    • regularly testing and monitoring the effectiveness of key controls, systems, and procedures.
  • Physical Safeguards:
    • assessing risks of information storage and disposal;
    • detecting, preventing, and responding to intrusions;
    • protecting against unauthorized access to or use of private information during or after the collection, transportation, and destruction or disposal of the information; and
    • disposing of private information within a reasonable amount of time after it is no longer needed for business purposes by erasing electronic media so that the information cannot be read or reconstructed.
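
The disposal obligation in the final bullet is often the hardest to operationalize, because it requires a company to track when private information is no longer needed for business purposes.  As a purely illustrative sketch (the record schema, the `last_needed` field, and the seven-year retention period are our assumptions, not drawn from the Act), a periodic retention sweep might look like the following; the actual erasure step must still ensure the data “cannot be read or reconstructed,” which is a question of media sanitization rather than code:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention period; the Act only requires disposal within a
# "reasonable amount of time," so the right value is a legal judgment,
# not a constant.
RETENTION_PERIOD = timedelta(days=7 * 365)

def records_due_for_disposal(records, now=None):
    """Flag records held past the retention period for secure disposal.

    `records` is assumed to be an iterable of (record_id, last_needed)
    pairs, where `last_needed` is a timezone-aware datetime marking when
    the business purpose for the data lapsed; the schema is hypothetical.
    """
    now = now or datetime.now(timezone.utc)
    return [record_id for record_id, last_needed in records
            if now - last_needed > RETENTION_PERIOD]
```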

For companies looking for assistance in implementing these requirements, see our previous blog posts on vendor diligence; data minimization and disposal (1, 2, 3, and 4); and access controls (1 and 2).

Note that small businesses (those with fewer than 50 employees, under $3 million in gross annual revenue, or less than $5 million in assets) need only implement reasonable administrative, technical, and physical safeguards appropriate to the size and complexity of the business, the nature and scope of its activities, and the sensitivity of the information it collects.
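
On our reading, the three thresholds are disjunctive, so a business qualifies for the lighter small-business standard if it satisfies any one of them.  Here is a minimal sketch of that test (the function name and the reading of the thresholds are ours; confirm the interpretation with counsel before relying on it):

```python
def qualifies_as_small_business(employees, gross_annual_revenue, year_end_assets):
    """Illustrative reading of the SHIELD Act's small-business thresholds.

    The conditions appear to be joined disjunctively, so meeting any one
    of them suffices; this is our interpretation, not legal advice.
    """
    return (employees < 50
            or gross_annual_revenue < 3_000_000
            or year_end_assets < 5_000_000)
```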

Safe Harbor Compliance

Companies that are already in compliance with certain existing state or federal data security laws that govern their data—such as the GLBA Safeguards Rule, HIPAA, or the NYDFS cybersecurity requirements—are deemed compliant with the SHIELD Act’s data security requirements.

Additional Reforms: Breach Notification and Remedies

The SHIELD Act also introduces reforms to New York’s data breach notification requirements and remedies for breach.

The Act expands the definition of a breach that requires notification to include unauthorized access to private information, so mere viewing of the data may trigger a notification requirement under the SHIELD Act.  But the Act also narrows companies’ breach notification obligations by imposing a harm requirement.  Notably, the kinds of harm that can trigger a notification obligation include emotional harm, although the Act provides no guidance on the circumstances in which emotional harm could be found.

Regarding remedies, the SHIELD Act expressly does not create a private right of action.  But the substantive security requirements are likely to be relevant in litigating the increasingly common negligence claims that follow a data breach resulting in harm to individuals.  The Act also allows the New York Attorney General to seek civil penalties (up to $5,000 per violation, with no aggregate cap) for knowing or reckless failures to comply with the new data security standards, although the Act does not make clear what constitutes a single “violation.”

Conclusion

With federal privacy and cybersecurity legislation apparently stalled in Congress, the SHIELD Act is the latest example of states filling the void by enacting laws that impose significant new requirements on businesses, including substantive data security obligations and expanded breach notification requirements.

Close attention should also be paid to the proposed New York Privacy Act, which did not pass in the last session but is expected to be reintroduced in the next legislative term.  The Privacy Act would impose fiduciary duties on companies that store personal information and create a private right of action akin to that of the CCPA.

We will be closely watching further developments in state cyber/privacy laws here at the Davis Polk Cyber Blog, and the Davis Polk Cyber Portal is available to assist our clients in assessing and complying with these regulatory obligations.

On June 6, 2018, the Eleventh Circuit vacated a cease and desist order issued by the FTC against LabMD as unenforceably vague.  The FTC’s Order, which resulted from a finding that LabMD had failed to maintain an adequate cybersecurity program, directed LabMD to “establish and implement, and thereafter maintain, a comprehensive information security program that is reasonably designed to protect the security, confidentiality, and integrity of personal information collected from or about consumers. . . .”  In short, it required LabMD to raise the standard of its cybersecurity program without specifying the means of doing so.  The Eleventh Circuit held that the Order was so broad that it could not be enforced or administered, and that it failed to include “any meaningful standard informing the court of what constitutes a ‘reasonably designed’ data-security program.”

The Eleventh Circuit’s LabMD decision highlights the ongoing debate over whether cybersecurity regulations should be standards-based or rules-based.  The standards-based approach favors broad, flexible requirements that mandate that a company establish a “reasonable” or “industry standard” cybersecurity program, without indicating how.  In addition to the FTC, many cybersecurity regulators have adopted a primarily standards-based approach.

California state law, for example, requires businesses to “implement and maintain reasonable security procedures and practices . . . to protect the personal information from unauthorized access, destruction, use, modification, or disclosure.”  Similar language appears in the cybersecurity laws in Illinois, Colorado and Louisiana.  In Europe, GDPR requires organizations to establish a cybersecurity program that addresses six principles for the processing of personal data, including “lawfulness, fairness, and transparency,” “purpose limitation,” “data minimisation,” “accuracy,” “storage limitation,” and “integrity and confidentiality.”

Proponents of the standards-based approach argue that having general and flexible requirements is the right model for cybersecurity regulations because they must be applied to companies that operate on different scales, and with very different types of data, resources, and risk profiles, which makes a one-size-fits-all approach impractical.  They also argue that flexible standards are necessary because cyber threats and technology are constantly changing, so specific measures that may be adequate one month may be insufficient the next.

In its LabMD decision, the Eleventh Circuit lent support to proponents of the rules-based approach to cybersecurity regulation, which favors concrete measures that a company must take to be deemed compliant, largely without regard to its particular characteristics.  Rather than requiring companies to meet current industry standards or best practices, rules-based cyber regulation creates them.

The standard-bearer for the rules-based approach to cybersecurity is the New York Department of Financial Services (NYDFS), which imposes significant, detailed responsibilities on covered entities, including:

  • The elements of a written incident response plan
  • Limitations on access privileges
  • Regular training
  • Penetration testing and vulnerability assessments
  • Multi-factor authentication
  • Application security
  • Encryption
  • Monitoring activity of authorized users
  • Data minimization
  • Vendor management

Similarly, the National Association of Insurance Commissioners has issued an Insurance Data Security Model Law that includes specific measures akin to the NYDFS rules.  The model law has already been adopted by South Carolina, with other states considering similar measures.  Massachusetts has also long mandated specific elements for information security programs, including secure user authentication protocols, encryption, and firewall protection.

The most obvious regulatory advantage of the rules-based approach is that it provides some degree of certainty, for both regulators and regulated entities, about what is required.  To address the concern that a one-size-fits-all approach places too heavy a burden on small companies, the NYDFS rules allow exemptions from certain specific requirements for very small companies (fewer than 10 employees), or where, based on a risk assessment, the company implements effective alternative compensating controls that are reviewed and approved by the CISO.

In its first cyber enforcement action, the NYDFS reaffirmed its commitment to its rules-based approach.  On June 27, the NYDFS announced that Equifax had agreed to take corrective action for its 2017 data breach, as set forth in a consent order that involved seven other state banking regulators.  In contrast to the FTC’s LabMD order, the NYDFS Equifax order includes specific requirements, including:

  • The Equifax board must review and approve a written risk assessment that identifies (1) foreseeable threats and vulnerabilities to the confidentiality of personally identifiable information; (2) the likelihood of threats; (3) the potential damage to the company’s business operations; and (4) the safeguards and mitigating controls that address each threat and vulnerability.
  • Equifax must improve standards and controls for supporting the patch management function. An effective patch management program must be implemented to reduce the number of unpatched systems and instances of extended patching time frames.
  • Equifax must enhance oversight of IT operations as they relate to the disaster recovery and business continuity functions.

The Eleventh Circuit’s decision is likely not a death blow to the FTC’s remedial powers and preferences.  The agency may continue to pursue a standards-based approach by encouraging settlement agreements and tailoring cease and desist orders to accommodate the LabMD decision.  Still, the Eleventh Circuit’s decision, together with the recent actions of the NYDFS and the other state regulators that joined the Equifax resolution, places another weight on the scale in favor of prescriptive, rules-based cybersecurity regulation.

The Davis Polk Cyber Portal, which is now available to our clients, provides detailed checklists and other resources to help companies comply with both their standards-based and rules-based regulatory obligations.

The authors gratefully acknowledge the assistance of summer associates Catherine Martinez and Jonah Stotsky in preparing this entry.

Both the General Data Protection Regulation (“GDPR”) and the California Consumer Privacy Act (“CCPA”) require companies to respond to customer data access requests.  But how do you know that the person making the request is actually who they say they are?  As we have previously noted on this blog, significant amounts of personal information are publicly available as a result of major data breaches, and that stolen data can be used to make fraudulent access requests.  So, how can a company avoid turning a good-faith effort to comply with its GDPR or CCPA access rights obligations into a privacy violation by unknowingly providing the personal information of customer X to someone pretending to be customer X?  A recent GDPR enforcement action in Germany, as well as guidance from German and California regulators, shows that companies must exercise diligence in making sure that they have properly authenticated the data subject who is making the access request.

The 1&1 Decision

On December 9, 2019, the Federal Commissioner for Data Protection and Freedom of Information (BfDI) fined telecommunications service provider 1&1 Telecom GmbH EUR 9,550,000 (approximately USD 10.7 million) for providing personal information to customers who called its call center and supplied only a name and a date of birth.  The BfDI considered such an authentication procedure to be in breach of Article 32 of the GDPR, which obliges the controller to “implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk” when processing personal data.  Specifically, the procedure lacked appropriate safeguards against fraudulent access requests.  The BfDI deemed the fine necessary because, in its view, the infringing practice posed a risk to the entire customer base, any of whom might make use of the authentication hotline.  1&1 is appealing the decision.

Notably, the BfDI issued the fine against 1&1 without evidence of any actual harm.  This action, along with the Berlin data protection authority’s October 30 decision discussed in more detail here, demonstrates increased enforcement against poor security practices that put personal data at risk, even in the absence of concrete injury.

Authentication Under GDPR

Under Article 15 of the GDPR, data subjects have a right to confirmation as to whether or not their personal data are being processed and, if so, the purposes of the processing, the categories of personal data concerned, the recipients of the data, the length of data storage, and the sources of the personal data.  The controller must respond to such a request without undue delay and in any event within one month of receipt.  And while Recital 64 to the GDPR explains that a controller should use “all reasonable measures” to verify the identity of a data subject, when in doubt, a controller is entitled, and indeed expected, under Article 12(6) to seek additional information to confirm the identity of the data subject making the request.

Authentication Under CCPA

Pursuant to Cal. Civ. Code § 1798.100(a), a consumer has the right to request that a business that collects the consumer’s personal information disclose the categories and specific pieces of personal information the business has collected about that person.  In addition, the consumer can request disclosure of the sources of the information and the categories of third parties with whom the information is shared.  Cal. Civ. Code § 1798.110(a).  A business that receives such a request has 45 days from receipt to respond to a verifiable consumer request, with the possibility of extending that period once by an additional 45 days, and risks civil penalties for failure to respond.  Cal. Civ. Code § 1798.100(d).
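
Because the 45-day clock runs from receipt and may be extended only once, it is worth building the deadline arithmetic into any request-intake workflow.  A minimal sketch follows (calendar math only; it does not address when a request is deemed “received” or the notice required to invoke the extension):

```python
from datetime import date, timedelta

def ccpa_response_deadlines(received):
    """Return the initial and once-extended CCPA response deadlines."""
    initial_due = received + timedelta(days=45)
    extended_due = initial_due + timedelta(days=45)  # one extension permitted
    return initial_due, extended_due

# A request received January 1, 2020 would be due February 15, 2020,
# or March 31, 2020 if the extension is properly invoked.
print(ccpa_response_deadlines(date(2020, 1, 1)))
```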

A business is not obligated to provide disclosure or delete the personal information of a consumer if the business cannot verify that the individual making the request is in fact that consumer, or a person duly authorized to act on the consumer’s behalf.  Furthermore, under the proposed regulations released by the California Attorney General on October 10, 2019, if a business cannot verify a consumer making a request to disclose specific pieces of information, the business shall not disclose any specific pieces of information.  These regulations further note that businesses should not disclose certain sensitive pieces of information in any circumstance, including a consumer’s Social Security number or driver’s license number.

Reasonable Authentication Procedures

To address the risk of fraudulent data access requests, regulators on both sides of the Atlantic have offered guidance on designing authentication programs that comply with legitimate access rights and avoid data leaks to fraudulent requesters.  For example, the Bavarian State Commissioner for Data Protection (BayLfD) issued the following guidance in July 2019 on how public bodies should authenticate access requests:

  • Compare the information provided by the data requester against existing contact information.
  • Send personal data to verified return channels, such as a postal address the company has on file for the data subject.
  • If the data subject and the company have previously communicated electronically using a secure means of authentication, use the same means to authenticate the access request.
  • When using a previously unknown email address, confirm the application using an already known address for that data subject.
  • Use an existing piece of authenticating information (e.g., password, customer number, transaction number).
  • Call the data subject at a known number to confirm the identity of the data subject.
  • Depending on the importance of the request, conduct an in-person interview, verifying identity with an official identification document.
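
To illustrate how several of these checks fit together, here is a minimal sketch of an intake routine; the field names, dictionary schema, and escalation path are our assumptions, not part of the BayLfD guidance:

```python
def authenticate_access_request(request, record):
    """Apply BayLfD-style checks before honoring a data access request.

    `request` and `record` are illustrative dicts keyed by field name;
    returns the verified return channel to use, or None to escalate.
    """
    # 1. Compare the requester's details against existing contact information.
    details_match = (request.get("name") == record["name"]
                     and request.get("date_of_birth") == record["date_of_birth"])

    # 2. Require an existing authenticator (password, customer number, etc.),
    #    not just information that is easy to find or guess.
    has_authenticator = request.get("customer_number") == record["customer_number"]

    if not (details_match and has_authenticator):
        # Escalate: call the data subject at a known number, or require
        # in-person identification for high-importance requests.
        return None

    # 3. Respond only via a verified return channel already on file,
    #    such as the postal address the company holds for the data subject.
    return record["postal_address"]
```

Note that the sketch refuses to rely on a name and date of birth alone, which is precisely the practice the BfDI sanctioned in the 1&1 action.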

The California Attorney General’s proposed CCPA regulations released on October 10, 2019, offer similar guidance on authentication when complying with access requests:

  • In determining the method by which the business will verify consumer requests, a business should consider:
    • the type, sensitivity, and value of the personal information;
    • the risk of harm to the consumer posed by any unauthorized access or deletion;
    • the likelihood that fraudulent or malicious actors would seek the personal information;
    • whether the personal information to be provided by the consumer to verify their identity is sufficiently robust;
    • the manner in which the business interacts with the consumer; and
    • available technology for verification.
  • If the business maintains a password-protected account with the consumer, the business may use existing authentication practices for the account to verify the consumer’s identity.
  • A business should verify a consumer’s identity:
    • to a reasonable degree of certainty, for example by matching at least two reliable data points, before disclosing categories of personal information or deleting less sensitive personal information; and
    • to a reasonably high degree of certainty, for example by matching at least three pieces of personal information together with a signed verification from the consumer, before disclosing specific pieces of personal information or deleting more sensitive personal information.
  • Where the business cannot verify the consumer’s identity using personal information already collected from the consumer, the business should delete any new personal information collected for verification purposes as soon as practical after processing the consumer’s request.
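
The tiered matching rules lend themselves to a simple gating function.  The sketch below is a simplification of the draft regulations (the tier names and the numeric mapping are ours), but it captures the core idea that more sensitive disclosures demand stronger verification:

```python
def permitted_disclosure(matched_data_points, signed_verification):
    """Map verification strength to what may be disclosed under the
    proposed CCPA regulations (an illustrative simplification)."""
    if matched_data_points >= 3 and signed_verification:
        # "Reasonably high degree of certainty": specific pieces of
        # personal information, or deletion of more sensitive data.
        return "specific pieces"
    if matched_data_points >= 2:
        # "Reasonable degree of certainty": categories of personal
        # information, or deletion of less sensitive data.
        return "categories only"
    # Unverifiable requests: do not disclose specific pieces; under the
    # draft rules, certain data (e.g., SSNs) are never disclosed.
    return "deny"
```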

Conclusion

With the CCPA coming into effect on January 1, 2020, many U.S.-based companies will find themselves flooded with access requests, like those already inundating European businesses.  To comply with the CCPA’s access request requirements, companies must develop systems agile enough to locate the relevant personal data and respond within the statutory time limits, while also taking reasonable steps to authenticate requesters.  Consistent with the recent guidance discussed above, companies should leverage existing authentication mechanisms and apply more rigorous methods when requests involve more sensitive data.

This article has also been posted at the Compliance & Enforcement blog sponsored by NYU Law’s Program on Corporate Compliance and Enforcement.