The need for versatility in data protection
by Rafik Dammak, Member of the Steering Committee, Internet Rights & Principles Coalition, Tokyo
Big data is an evolution that brings new opportunities for businesses, but no clear benefit for users. It is the latest instance of an evolving technological threat to privacy, yet it remains difficult to grasp and to separate the hype from the reality of the trend. At the same time, we have to preserve and protect the right to privacy, regardless of such changes; there is a duty to do so even while things are still fuzzy and moving quickly. The hype, however, may distract us from enforcing effective regulation, or prevent us from doing so altogether. We clearly need a different approach if we are to respond to the challenges of big data, and of endless technological progress, while still guaranteeing privacy.
Firstly, recognizing the limitations of the tools we have today, as outlined by the author, is a prerequisite to finding appropriate answers. Secondly, I would like to suggest a different perspective, as a member of civil society and, perhaps more relevantly here, as a software engineer concerned with designing and building sustainable systems. I will therefore draw on software engineering metaphors and principles.
Big data is enabled by the progress of technologies such as the Internet of Things and cloud computing, on top of the data already collected by IT systems of every kind, whatever their primary purpose or usage. Existing data protection regulations and frameworks, however, were designed to cope with “legacy” and traditional IT systems that process personal data for specific purposes. Big data’s breakthrough lies in combining massive data from different sources with new technologies and platforms, and with more advanced algorithms and statistical and mathematical models. Compare the regulation with the thing being regulated: big data and cloud computing are about scalability, about planning for the continuing expansion of storage, computing and networking resources and of data, while regulation tends to respond to anything new case by case, in an ad hoc manner, and usually only when it is already too late. It is a reactive rather than a proactive approach. The conclusion is that existing laws fail to “scale out”: they cannot be interpreted or adapted to cover new use cases, new technologies and applications adequately. Effective, enforceable regulation needs to move at the same pace as, and stay aligned with, what it regulates.
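To make the “scale out” point concrete, here is a minimal sketch in Python, purely illustrative and with entirely hypothetical names and rules, not drawn from any actual law. A rule that enumerates specific technologies breaks on anything new, while a rule keyed to a principle of the processing, here purpose limitation, applies even to technologies that did not exist when it was written.

    # A “legacy” rule that enumerates known technologies: every new
    # technology forces an amendment to the rule itself.
    def regulated_by_enumeration(system: str) -> bool:
        return system in {"mainframe_database", "paper_records", "web_form"}

    # A principle-based rule keyed to a property of the processing
    # (purpose limitation) that holds for any technology, old or new.
    def respects_purpose_limitation(declared_purpose: str, actual_use: str) -> bool:
        # Data may only be used for the purpose declared at collection
        # time, whatever the platform doing the processing.
        return actual_use == declared_purpose

    print(regulated_by_enumeration("iot_sensor_network"))         # False: the rule failed to scale
    print(respects_purpose_limitation("billing", "ad_profiling"))  # False: a violation on any platform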
So, can data protection and privacy borrow the scalability principle in order to handle the next technological threat to privacy? Can data protection regulations be conceived and designed to remain effective many years ahead? Can data protection laws be built iteratively and evolve? Can a data protection framework cope with evolving threats to privacy? Yes, it is possible: if we “design” laws so that new cases can be added, just as a well-designed system can take on a new component. It is possible if we make regulation iterative. A new law is already a “legacy” product by the time it starts to be applied; “upgrading” data protection law is a continuous effort, in which the principles remain but the responses change.
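To illustrate the “new component” idea, here is another small, hypothetical Python sketch, not a real regulatory system. The stable core is written once; responses to new technologies are registered later, without rewriting the framework, the way a law might gain new implementing rules while its principles stand.

    from typing import Callable, Dict

    # Hypothetical registry of technology-specific safeguards. New entries
    # can be added years later without touching the framework below.
    safeguards: Dict[str, Callable[[dict], bool]] = {}

    def register(technology: str):
        def decorator(check: Callable[[dict], bool]) -> Callable[[dict], bool]:
            safeguards[technology] = check
            return check
        return decorator

    def evaluate(technology: str, processing: dict) -> bool:
        # The framework itself never changes; unknown technologies fail
        # safe until a safeguard for them is registered.
        check = safeguards.get(technology)
        return check(processing) if check is not None else False

    @register("cloud_storage")
    def cloud_check(processing: dict) -> bool:
        return processing.get("encrypted_at_rest", False)

    # An “upgrade” added in a later iteration: big data gets its own
    # safeguard while everything above stays untouched.
    @register("big_data_analytics")
    def big_data_check(processing: dict) -> bool:
        return processing.get("aggregated", False) and not processing.get("re_identifiable", True)

    print(evaluate("big_data_analytics", {"aggregated": True, "re_identifiable": False}))  # True
    print(evaluate("genetic_profiling", {}))  # False until a safeguard is registered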
The author advocates a strong and flexible data protection legal framework, but that is far from enough. Data protection authorities also need to be able to anticipate change and to plan their responses in advance. To use the software metaphor again: data protection must be updated regularly with new features to cope with new realities, and the designer, in this case the legislator, needs to do so often.
On the other hand, even where the laws, regulations and legal frameworks are in place, there remains the question of the capacity and readiness of data protection authorities. What about the skills of the people who are supposed to understand and enforce the rules across the diverse applications and instances of big data? Even businesses have a hard time finding data scientists and big data experts to build and exploit those oceans of data. Without adequate expertise, data protection agencies and commissioners will be unable to detect new infractions and irregularities.
The author advocates privacy by design and giving control to users. Again, it is not enough. The problem is not only developers’ awareness or knowledge of privacy rules and practices; it is forgetting who matters: users. In agile software development, everything revolves around users, yet when it comes to data usage and processing, companies and start-ups tend to focus on their business models and ignore the fact that they need to build user-centric, user-friendly systems first. Being user-centric must be systematic. Once the focus shifts to the right place, privacy by design practices will be effective, as the closing sketch below tries to illustrate.

Finally, this raises a missing question that we need to answer: how much innovation is permissible without undermining or threatening privacy? We must avoid framing privacy as antagonistic to innovation, or lowering data protection standards to accommodate innovation. In fact, innovation should be thought of as a way to improve data protection, to strengthen privacy as a right on the Internet; any innovation that fails to do so is only a regression of rights and a step backwards. So, can data protection regulations make big data an innovation that benefits users and citizens?
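To close the software metaphor with something concrete: a minimal, entirely hypothetical sketch of privacy by design as a default rather than an afterthought. The protective choice is what a developer gets by writing no configuration at all, so collecting more becomes the deliberate exception.

    from dataclasses import dataclass

    @dataclass
    class CollectionPolicy:
        consent_given: bool = False      # opt-in by default, never opt-out
        fields: tuple = ("user_id",)     # data minimisation by default
        retention_days: int = 30         # short retention unless justified

    def collect(record: dict, policy: CollectionPolicy) -> dict:
        if not policy.consent_given:
            return {}                    # no consent, no data
        # Keep only the fields the declared purpose actually needs.
        return {key: value for key, value in record.items() if key in policy.fields}

    record = {"user_id": 42, "location": "Tokyo", "contacts": ["alice", "bob"]}
    print(collect(record, CollectionPolicy()))                     # {} by default
    print(collect(record, CollectionPolicy(consent_given=True)))   # {'user_id': 42}

Where the protective behaviour is the default, innovation and privacy stop being opposites: improving the product means improving the defaults.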