Rebuilding Consumer Trust through Responsible Data Use

August 31, 2020

Two years after GDPR and with CCPA now in full effect, how is it possible that there are still so many unanswered questions about how consumer data is used? Nearly 80% of Americans remain concerned about their personal data, and only 6% know what is being done with the data collected. Does the future hold a proliferation of CCPA-like regulations in every state? Or is there an opportunity for business leaders to rebuild consumer trust?

This is where the notion of responsible data use comes into play. A simple example? Consider the needs of traffic planners. Historically, planners were limited to costly and time-intensive surveys and physical observation. Today, we can gather much richer and more accurate traffic patterns based on mobile phone data. Analyzing home locations to understand popular routes and destinations, or even population demographics, enables new levels of sophistication in transportation optimization. Most importantly, these use cases only require aggregate information about the people traveling (e.g., the time of day preferred by morning commuters). They do not require knowing which specific individuals are on the roadway.

Responsible use means that there is a narrow and well-crafted goal for using the data (e.g., road construction planning), that the application of the data achieves that goal (routes taken by demographic segments) and only that goal, and that the application does not rely on identifying any specific individual. The standards of responsible use strike a balance: delivering innovation – bringing the power of data science to improving consumer experience and discovering new market opportunities – while keeping trust and transparency paramount.

The Current Approach to Personal Information

Before we get into how we make responsible data use the norm, we need to understand what is currently considered personal information and deserving of protection.

Historically, data privacy centered on personally identifiable information (PII) — data that could identify a person or be used to commit fraud (e.g., name, Social Security number, driver’s license number). In recent years, the focus has broadened to Personal Information, which includes persistent identifiers such as advertising IDs, IP addresses, and cookies, as this pseudonymous data reflects an individual’s behavior and may be used to indirectly identify a consumer when combined with other information.

With the rise of 5G and IoT, data interactions will continue to explode and, as a result, we will see continued evolution in consumer preferences for what is considered private (and more “grey areas” for regulatory policies). It thus falls on the companies creating and using this data to enforce standards for the “responsible use of data” and address the question “should we do this?” rather than “can we do this?”

4 Questions That Define Responsible Data Use

Responsible data use is based on the answers to four simple questions. It starts with ensuring that the problem statement being addressed only requires aggregated and de-identified data.

  1. Is the need for aggregated or individual insight? The platform should require a minimum threshold of records for inclusion (aggregated data) and should never need to identify any specific individual. Data that does not meet this standard should be discarded so that isolated cases are not retained.
  2. How has consumer consent been validated? It is not enough to confirm that consumer consent was collected by the first-party data source. Organizations need to ensure that the data they are using was obtained with consumers’ rights and benefits in mind, that consumers know why the data is being collected, and that they can change their participation status at any time.
  3. Are data inputs privacy-aware? Once validated, all personal or user-level identifiers should be discarded by either the first-party provider or the platform. Privacy-aware refers to using the intelligent outputs of de-identification or aggregation, rather than the original data, which can identify a unique individual directly or indirectly. For example, a privacy-aware output could be an obscured device ID, so that what the platform sees is a random string not identifiable to any individual. Platforms that only consume privacy-aware data reduce the risk of human error and malicious actions.
  4. How transparent is the data policy? Recent research shows the importance of open and friendly communication in conveying how the data provided by consumers clearly matches the purpose for which it is needed. This is similar to the changes implemented after the Credit Card Act of 2009, where simpler language and presentation made disclosures far more accessible; the same approach can make a company’s data practices and data lifecycle (data sources, types of data interactions, and data retained in the platform) much easier to understand.
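Questions 1 and 3 can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not any specific platform's implementation; the threshold value, salt, and function names are assumptions for the example. It de-identifies device IDs with a salted one-way hash at ingestion, then reports only hour-of-day groups that meet a minimum size, discarding the smaller, isolated cases.

```python
import hashlib
from collections import defaultdict

# Hypothetical threshold: minimum distinct devices a group must contain
# before it is reported; a real platform would tune this to its risk model.
MIN_GROUP_SIZE = 5

def obscure_device_id(device_id: str, salt: str) -> str:
    """Replace a raw device ID with a salted one-way hash, so the platform
    only ever sees a random-looking string. (In practice a secret, rotated
    salt or a keyed HMAC is needed; a plain hash of a guessable ID is weak.)"""
    return hashlib.sha256((salt + device_id).encode()).hexdigest()

def aggregate_trips(trips, salt: str) -> dict:
    """trips: iterable of (device_id, hour_of_day) pairs.
    Returns distinct-device counts per hour, keeping only hours whose
    group meets the aggregation threshold."""
    devices_per_hour = defaultdict(set)
    for device_id, hour in trips:
        # De-identify at ingestion; the raw ID is never stored.
        devices_per_hour[hour].add(obscure_device_id(device_id, salt))
    return {hour: len(devs)
            for hour, devs in devices_per_hour.items()
            if len(devs) >= MIN_GROUP_SIZE}

trips = [("dev-1", 8), ("dev-2", 8), ("dev-3", 8), ("dev-4", 8),
         ("dev-5", 8), ("dev-6", 17), ("dev-7", 17)]
print(aggregate_trips(trips, salt="demo-salt"))  # hour 8 kept; hour 17 dropped
```

Because the raw identifiers are discarded at the point of ingestion, even an operator error downstream cannot expose which specific individuals were on the roadway: the platform only ever handles obscured strings and group counts.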

The Virtuous Cycle of Innovation and Trust

There will still be questions about the nature of de-identification and aggregation (and extreme examples of “extrapolating” individual identities). However, without user-level identifiers, re-identification is nearly impossible. This ongoing consternation reflects the extent to which trust in technology has eroded.

Businesses do not need to wait for policymakers to define evolving privacy needs. This is the opportunity for companies to rebuild consumer faith. The standards for responsible use of data provide a framework for harnessing the power of data without compromising fundamental rights to privacy. By proactively adopting responsible-use standards, we can start to differentiate applications that respect consumer privacy and create a virtuous cycle in which consumers gain confidence in the benefits of these applications.