On Ethical Controls: From Personal Data to Household Data to Social Data
By Henry Morris, May 15, 2018
The Facebook situation is heating up, with CEO Mark Zuckerberg testifying before House and Senate committees of the U.S. Congress this week. Zuckerberg and COO Sheryl Sandberg have given multiple TV and press interviews over the past few weeks, apologizing for the current situation and committing to do better at protecting personal data. In response to the Cambridge Analytica-Facebook mess, it’s worth the effort to go to “Settings” on our Facebook accounts to check which parameters are set for sharing our personal data and which apps have used it. Though, as we learned from the Cambridge Analytica situation, data gathered through apps built and deployed on the Facebook platform can stay around for years outside of Facebook’s control, despite promises from the app developer that the data had been deleted.
The issue of unwanted and unknowing data sharing has been framed in the interviews with Facebook execs as protection of your personal data. But that’s not a completely accurate way to characterize the situation. Given today’s social technology, the real concern is the sharing of data gathered through a social network from your friends and about your friends. After all, only 270,000 Facebook users downloaded the app in question, yet up to 87 million Facebook users could have been affected, the company reported, because the reach of the app was broadened to include the members of the social networks of each individual who downloaded and used it – i.e., their friends. The gathering of social data expands geometrically as the reach of the new social data aggregators, and of those who leverage popular social platforms, grows.
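The scale of that expansion follows directly from the two reported figures. As a rough back-of-envelope calculation (the per-installer reach is an implied average, not a number Facebook reported):

```python
# Figures reported during the Cambridge Analytica episode
direct_installs = 270_000      # users who actually downloaded the app
affected_users = 87_000_000    # Facebook's estimate of affected accounts

# Implied average number of accounts exposed per installer,
# via each installer's friend network
implied_reach = affected_users / direct_installs
print(f"Each install exposed roughly {implied_reach:.0f} accounts")  # ~322
```

In other words, each person who consented for themselves effectively "consented" for hundreds of friends who were never asked.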
The U.S. Federal Trade Commission (FTC) cited Facebook for data sharing violations in 2011. In announcing a settlement with the company in October 2011, the FTC stated:
The social networking service Facebook has agreed to settle Federal Trade Commission charges that it deceived consumers by telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public. The proposed settlement requires Facebook to take several steps to make sure it lives up to its promises in the future, including giving consumers clear and prominent notice and obtaining consumers’ express consent before their information is shared beyond the privacy settings they have established.
As we know now, this order did not solve the problem; data that was accessed by developers was not deleted. Facebook is not a traditional data aggregator that collects data on individuals, but a firm that maintains data on individuals and their social networks. That means that my association with my friends, which is data about me, is shared with advertisers who leverage these connections to market to a social network of which I am a member.
Traditional Data Aggregators and the Newer Social Data Aggregators
The role of a data aggregator or data broker is not new. For example, Acxiom, founded in 1969, is a large data broker and provider of database marketing services whose foundational asset is a database of hundreds of millions of consumers. The value of this data asset is not just that there are so many records, but that the records include several thousand data attributes (5,000 according to one report) on each and every consumer. Data on individuals and households is collected via public records, surveys, and summary information on retail transactions. The personal and household data drives segmentation of consumers and their households into clusters reflecting socio-economic status. Acxiom defined 70 clusters (called Personicx) to which it can assign nearly every consumer in the United States. These groupings are valuable to advertisers seeking to market to consumers of similar means and tastes. Responding to calls for greater transparency in 2013, Acxiom allowed consumers to view the data that Acxiom had collected on them and their households.
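The mechanics of this kind of attribute-based segmentation can be sketched in miniature. The cluster names, attributes, and centroid values below are invented for illustration only; the real Personicx system uses thousands of attributes and 70 proprietary clusters:

```python
# Toy sketch of attribute-based consumer segmentation: assign each
# record to the nearest cluster centroid over normalized attributes.
# Hypothetical centroids over (income_percentile, urban_density):
centroids = {
    "Urban Professionals": (0.85, 0.90),
    "Suburban Families":   (0.60, 0.45),
    "Rural Retirees":      (0.35, 0.10),
}

def assign_cluster(income_percentile, urban_density):
    """Return the cluster whose centroid is closest (squared distance)."""
    def dist(name):
        ci, cd = centroids[name]
        return (income_percentile - ci) ** 2 + (urban_density - cd) ** 2
    return min(centroids, key=dist)

print(assign_cluster(0.80, 0.95))  # → Urban Professionals
```

The advertiser never needs the raw attributes; the cluster label alone conveys "consumers of similar means and tastes."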
The FTC studied the data broker industry in 2014, calling for greater transparency to individuals on the data collected on them, and accountability by the firms for their business practices. Nine data brokers were studied for the 2014 report, in three categories: marketing, risk, and search. The companies were Acxiom, CoreLogic, Datalogix, eBureau, ID Analytics, Intelius, PeekYou, RapLeaf, and Recorded Future. See the FTC report Data Brokers: A Call for Transparency and Accountability.
Note that no social media companies were included in the study, as they were not classified as “data brokers”, though their business model relied on advertisers’ access to data available via their social platforms. But one of the data brokers included by the FTC in the 2014 study, RapLeaf, had previously been mentioned in conjunction with Facebook in a situation that foreshadowed the current one. In 2010, the Wall Street Journal reported on how RapLeaf had accessed Facebook data, including Facebook user IDs, and linked it with its own data on consumers to build an expanded consumer database. RapLeaf then sold the data, including Facebook IDs, to political advertisers and others. For the original 2010 article, see: Facebook in Privacy Breach: Top-Ranked Applications Transmit Personal IDs, A Journal Investigation Finds.
For an updated article published during the current controversy, see: Facebook’s Lax Data Policies Led to Cambridge Analytica Crisis: Social-media giant’s loose policing of app developers went on for years.
By 2014, when the FTC study was produced, sharing of personal data through social media companies was well established, and the app that fed the Cambridge Analytica models had already been deployed. Today, the calls for transparency and accountability are centered on the social media companies as sources of personal data for advertising. The center of gravity in the fight for data privacy and protection has moved from personal and household data to social data – reaching data on my relationships. This shift is critical, as it builds not only on demographic profiles of individuals and households, but also on information on each individual’s and household’s social network.
Ethical Controls for Protecting Social Data
Why does the introduction of social data change the situation in terms of ethical controls for data protection? As I noted in response to one of the comments on my prior post, the issue is not limited to a consumer giving permission for use of his/her personal data. The model for Facebook and other social media companies is that your personal data opens up data on your entire social network, i.e. to all of your friends. This is appealing to advertisers whether selling consumer products or seeking to influence an election by gaining information on my friends – people who are likely to buy and potentially vote like me.
We should recast the issue as protection of social data, not just personal data. This raises the additional ethical challenge of acquiring consent beyond the individual who is the entry point to a network of friends. At a minimum, the default setting should be that my friend list is not shared with the advertiser. And for full transparency, the advertiser’s request would need to be routed to each of my friends individually for their specific opt-in.
Acquiring a friend’s consent would need to be done within a process that does not disclose the friend’s identity while the request is being made. Managing such a process points to a highly distributed, decentralized solution for which a blockchain is well suited, as I mentioned in the last post. For more on blockchain and data protection, see the recent Harvard Business Review article by Michael Mainelli: “Blockchain Could Help Us Reclaim Control of Our Personal Data”.
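The consent-routing process described above can be sketched as follows. All names and structures here are hypothetical, and this is only a minimal illustration of the privacy property at stake: the platform mediates each request through an opaque token, so a friend’s identity reaches the advertiser only after an explicit opt-in. A production system would need authentication, auditing, and possibly the blockchain-based ledger suggested above:

```python
import uuid

class ConsentBroker:
    """Mediates advertiser requests so friend identities stay hidden
    until (and unless) each friend explicitly opts in."""

    def __init__(self, social_graph):
        self.social_graph = social_graph   # user -> list of friends
        self.pending = {}                  # opaque token -> (advertiser, friend)

    def request_access(self, advertiser, user):
        """Route a request to each friend via an opaque token; the
        advertiser never learns which friend a token refers to."""
        tokens = []
        for friend in self.social_graph.get(user, []):
            token = str(uuid.uuid4())
            self.pending[token] = (advertiser, friend)
            tokens.append(token)           # delivered to friends by the platform
        return tokens

    def respond(self, token, opt_in):
        """A friend answers. Identity is disclosed to the advertiser
        only on an explicit opt-in; otherwise it never leaves."""
        advertiser, friend = self.pending.pop(token)
        return (advertiser, friend) if opt_in else None

broker = ConsentBroker({"alice": ["bob", "carol"]})
tokens = broker.request_access("AdCo", "alice")
# bob opts in, carol declines: only bob's identity is ever released
granted = [r for t in tokens if (r := broker.respond(t, opt_in=(t == tokens[0])))]
print(granted)
```

Note that opting out leaves no trace with the advertiser at all, which is the point: silence or refusal by a friend should be indistinguishable from that friend never having existed in the graph.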
Such a complex opt-in process that encompasses each member of a person’s social network would add much friction to the business model of a social media company like Facebook. But this is the direction that we should pursue if we are serious about transparency and responsibility by all data brokers – including the purveyors of social data. Ethical controls are needed to provide transparency and protection to individual consumers as well as to each member of their social network.
Originally published on LinkedIn Pulse
About the author
Dr. Henry Morris is a thought leader on analytic applications, artificial intelligence, and the need for ethical controls. He is currently researching and writing on business strategies to responsibly use artificial intelligence and advanced analytics for augmenting human intelligence with machine intelligence. How does this change the way data is acquired, prepared, and used in building algorithms and analytical models? What are the changes in work activities for business processes such as finance, procurement, operations, and human resources?
At IDC, the technology industry market research firm, Dr. Morris founded their research practice on analytics, coining the term “analytic applications”, now a standard tech industry category for software packaged to support decision making in a specific domain. He was Senior VP for Worldwide Software and Services Research and IDC Fellow for cognitive, analytics and Big Data. Prior to IDC, he was a technical writer and marketing specialist at Digital Equipment Corporation. There he wrote “Introduction to Database Development” on strategies for designing databases to support both transaction processing and analytical workloads.
Prior to his 35 years in the high tech industry, Dr. Morris was Assistant Professor of Philosophy and Religion at Colgate University. He holds a Ph.D. in philosophy from the University of Pennsylvania and a B.A. with distinction from the University of Michigan. This education has anchored a career that has ranged from writing and teaching about analytic philosophy to writing and lecturing about analytic software and intelligent systems.