Companies have invested billions in technology that analyses customer needs and wants with ever-greater accuracy, to the point where the results can already feel creepy. In this article, Robert Owen and Christopher Surdak examine why companies should take the lead in regulating their own use of customer data.
By now in America it has become trite to note the exponential growth of Internet-linked devices and their ability to improve our daily lives. Not only are we all familiar with them; most of us are utterly addicted to them.
Ironically, one of today’s biggest challenges is simply managing the power that is available literally in the palm of our hands. In this quest for ease through the utilisation of technology, our anonymity and privacy were sacrificed long ago.
Early adopters among us have incrementally sacrificed privacy in order to access the conveniences found on so many of those shiny tech things we now consider necessities. Though made willingly at first, this apathetic acceptance of privacy intrusions is not likely to last as we come to understand the full depth and breadth of just what it is we are surrendering.
Anyone familiar with smartphones and apps has likely experienced the moment when customer intimacy suddenly becomes too intimate, too present and too invasive. It is no exaggeration that given contemporary algorithms and the trove of available customer data, companies have begun to move from mere marketing to a deeper level of manipulation than ever before possible. Anyone surprised by the revelation that Facebook and others have been emotionally manipulating their customers clearly doesn’t understand the fundamental business model behind such “free” platforms and services.
At a completely subconscious level, our behaviors and buying impulses can be influenced and exploited. As such, our free will may no longer be so free. At these times many of us feel unsettled, as though we’re being watched and analysed a little too closely, known a little too well, and it all becomes just a little bit creepy.
Over the past decade, social media, Big Data and mobile technologies have been integrated into our lives at a blistering pace. All told, thousands of companies have invested billions of dollars in these technologies. The purpose of these investments is obviously to make money and in order to do that, customer analytics have become increasingly sophisticated, accurate, persuasive and, ultimately, creepy.
In this post-Snowden world the public at large may be wondering who, if anyone, is protecting them. Where is their right to privacy, to anonymity? Does it really even exist anymore?
While the revelations made by Snowden have placed governmental monitoring of the citizenry squarely in the sights of Congress and the Judiciary, there appears to be little hue and cry over private industry’s use of our data. Indeed, the state of the art in this arena is moving so rapidly that any attempt to legislate or regulate the use of consumer information in targeted marketing would be futile; provisions of such legislation would be obsolete before they had undergone a single subcommittee hearing.
This raises the question: if our government cannot protect our privacy interests as they pertain to how companies engage us online, who is in a position to provide such oversight, and ultimately protection, from those who would sell us that which we might not actually want?
Arguably, it is those companies that best understand and make use of all this consumer data who may be best positioned to provide appropriate standards for its use. Major data generators such as Google, Facebook, Twitter, Microsoft and Amazon hold much of this consumer information and drive much of the innovation in its analysis and subsequent use.
They are also the leaders in defining what constitutes appropriate use, through the constant evolution of their business models. As the custodians of both our data and our digital consciousness, it stands to reason that these businesses should also serve as guardians of our digital psyches.
Fortunately, there are precedents for such self-regulation by industry, including FINRA (Financial Industry Regulatory Authority) and the stock exchanges in financial services. Self-regulation works because these businesses know their industry better than any regulator could, and because they are driven by the profit motive to avoid losing the goodwill of their customers. Customers who are creeped out by a business’ conduct do not become repeat customers.
And so the businesses that leverage Big Data in their business models are best positioned to know the technical innovations and to implement proper safeguards to empower customers to control the degree to which they surrender their freedom in exchange for other benefits.
A Framework for Privacy Management
What might an industry coalition implement to assure customers that their best interests are being preserved? Such a framework might include at least four key attributes:
Disclosure: Consumers would be notified of what is collected, how frequently, by whom and how it may be used.
Transparency: Consumers would be notified when their data is being used, in real time, to influence their opinions or actions.
Recourse: Consumers must be allowed to change how their data is used according to their own comfort level. This means no more blanket authorisations when a user downloads an app or signs up for an account; rather, authorisation must be extremely granular and under the end-user’s control.
Monitoring: Consumers must be able to confirm that businesses respect their wishes. This means a Reagan-esque trust-but-verify capability.
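To make the four attributes concrete, here is a minimal, purely hypothetical sketch of how granular consent, default-deny access checks and an audit trail might fit together in code. The class and field names are illustrative assumptions, not any company's actual implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PrivacyLedger:
    """Illustrative per-user record of disclosures, granular consent and monitoring."""
    user_id: str
    # Recourse: consent is granular, keyed by (data_type, purpose), not a blanket grant.
    consents: dict = field(default_factory=dict)
    # Monitoring: every access attempt is logged so the user can verify compliance.
    audit_log: list = field(default_factory=list)

    def set_consent(self, data_type: str, purpose: str, granted: bool) -> None:
        # The end-user can change any individual permission at any time.
        self.consents[(data_type, purpose)] = granted

    def request_access(self, data_type: str, purpose: str) -> bool:
        # Disclosure and transparency: each use is checked and recorded as it happens.
        allowed = self.consents.get((data_type, purpose), False)  # default deny
        self.audit_log.append(
            (datetime.now(timezone.utc), data_type, purpose, allowed)
        )
        return allowed
```

In this sketch a user might grant `("purchase_history", "recommendations")` while leaving `("location", "ad_targeting")` denied; because the default is deny and every check is appended to `audit_log`, the consumer retains both granular recourse and a trust-but-verify record.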
By implementing such a framework, companies could continue taking advantage of customer data to drive business without running the risk of scaring off the very customers with which they are trying to connect. Perhaps most importantly, they would continue to engage in their regular operations without running the risk of a backlash and the ham-handed government regulation that would inevitably follow.
Over a year ago Amazon patented “predictive shipping,” where the company intends to ship products to customers in anticipation of an actual purchase. That is, Amazon will ship, and bill you for, things they know you want, whether you know it or not. Such is the state of the art in predictive analytics.
To some this may sound like a valuable innovation, like receiving a surprise birthday present every day of the year, while to others it may represent a new plateau of customer manipulation and intrusion. If this is the case for the majority, the backlash against such aggressive use of our personal data may be substantial.
Technology has no morality; it simply is. The manner in which we use a given technology is what determines its morality, or lack thereof. Companies must temper their enthusiasm for Big Data analytics with a strong sense of responsibility, even humility, over the power that they now wield with regard to customers.
If they do not, they risk offending those customers, awakening the sleeping giant of government over-regulation, and ultimately destroying the business model on which their entire enterprise rests.
About the Authors
Robert Owen is Partner in Charge of the New York Office of Sutherland Asbill & Brennan LLP. He has decades of commercial litigation experience and is an experienced teacher of trial skills. Chambers & Partners consistently ranks him as one of five first-rank leading individuals in eDiscovery in America.
Christopher W. Surdak, JD, is the author of the book “Data Crush,” getAbstract’s International Book of the Year 2014. He is a technology evangelist specialising in the organisational and legal impact of new technologies.