Modernizing data protection along with data processing technologies

by Susanne Dehmel, BITKOM Germany

In his article, Peter Schaar raises a great many questions that keep coming up with regard to the use of the Internet and big data, and more generally in relation to the development of our digital society. The digitalization of our lives leads to new dimensions both in the amount of data processed and in our ability to process and access data. We need new methods of processing in order to cope with the existing and ever growing amounts of data we produce. This efficient way of processing data not only helps us to keep our familiar analysis processes going but, fascinatingly, also opens up completely new forms of analysis that enable formerly unthinkable scientific and economic applications. This means we have to re-balance interests. We therefore struggle with the question of whether we can incorporate new technologies into the existing categories of data protection law without losing too much of the scientific and economic potential of these technologies on the one hand and legal clarity on the other – or whether we need to rethink some data protection laws and principles in order to preserve effective protection of our private sphere and our freedom of action into the future.

Not only does Schaar name a number of the questions that arise; he also outlines some answers to them. A very important one is the setting of incentives for processing data in anonymized and pseudonymized form, in order to keep the intensity of interference with basic rights, and the risk of misuse, as low as possible. The option of anonymized data processing is also significant for big data applications, as consent and purpose limitation can constitute barriers for applications that were not foreseeable at the moment of data collection, or where there is no possibility of obtaining consent from the data subject. The note on the German Telemedia Act (Telemediengesetz) and its definitions of “anonymous” and “pseudonymous” is helpful with a view to the ongoing consultations on the EU data protection regulation. There seems to be no common understanding of these notions among member states; yet it would be very helpful to agree on the definition of these terms, also with regard to third countries. But if we call for the increased use of anonymized and pseudonymized data as a means of limiting risk, we should be aware that the time and effort needed to achieve pseudonymization and anonymization must remain feasible and proportionate to the risks. At the same time, it is quite clear that as the amount of available data about us grows, it becomes significantly more difficult to anonymize data in a way that cannot be reversed by anyone, anywhere, now or in the future. Incentives need to be offered for companies to set up safe environments – advanced anonymization technologies combined with organizational measures – and these incentives must be strong enough to motivate companies to undertake the effort. It must be possible to escape the restrictions of data protection law once all links relating the information to a person have been removed. It should also be possible to handle pseudonymized data flexibly when there is no indication of undue negative effects on the data subjects’ interests. We therefore need an international concept with a relative definition of anonymity, as German law already provides, enhanced with provisions for the privileged handling of pseudonymized data.

Schaar also names a procedural instrument that should help to balance the risks and interests of data processing within companies and other responsible organizations: privacy impact assessments (PIAs). From an industry perspective, this could become useful, as we know that similar instruments function well in other areas of compliance. In order to introduce, or rather extend, impact assessments concerning privacy, a legal obligation to use them might be helpful. But the law should not be too detailed about how these PIAs are to be conducted. There needs to be flexibility in the detail and depth of a risk assessment, depending on the nature and context of the processing operation or product and the privacy risks to be expected. Best practices should be collected, and industry could work together with stakeholders who represent the interests of data subjects to develop standards for characteristic procedures and contexts in a process of self-regulation or co-regulation.
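To make the relative notion of anonymity and the privileged handling of pseudonymized data more concrete, here is a minimal Python sketch of keyed pseudonymization. It is an illustration under my own assumptions (the key handling, field names, and HMAC-based approach are not from the text): as long as the secret key is held separately under organizational controls, records cannot be re-linked to a person without it, and destroying the key removes the last remaining link.

    import hmac
    import hashlib

    # Hypothetical sketch: keyed pseudonymization of a direct identifier.
    # The key must be stored separately from the data set; without it,
    # the pseudonym cannot be traced back to the person.
    SECRET_KEY = b"stored-separately-under-organizational-controls"

    def pseudonymize(user_id: str) -> str:
        """Map a direct identifier to a stable pseudonym via HMAC-SHA256."""
        return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"user_id": "alice@example.org", "page_views": 42}
    safe_record = {"pseudonym": pseudonymize(record["user_id"]),
                   "page_views": record["page_views"]}
    print(safe_record)

Whether such data then counts as anonymous in the legal sense depends, as argued above, on the effort that would be required to reverse the mapping.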

Such risk assessments can help to realize the concept of “privacy by design” in all privacy-critical processes and products, and in combinations of them. This concept is important for realizing data protection effectively and economically. The goal is to keep data protection in mind from the start and throughout the development of new products or services. It thus becomes possible to implement measures that keep privacy risks to a minimum and/or give consumers and companies the opportunity to decide consciously how their data is used with a certain service or product. The concept of privacy by design is, in my view, more important and more helpful than the concept of privacy by default, as the latter concentrates only on the stage of delivering the product or service and automatically puts data protection at the top of the consumer’s priority list, thus tending to patronize him. Quite often, consumers have to choose between optimum user-friendliness/convenience and optimum data protection, as both may not be achievable at the same time. Take social networks as an example: some users want to be found by as many matching contacts as possible, because their business may depend on these contacts; others only want to use the network to share information with people they already know. Both groups expect to be able to do so conveniently, without much bother beforehand, but their preferred privacy settings will differ. Privacy by design should mean combining optimum convenience with optimum privacy where possible – and where it is not, consumers should be given the choice between different options with different advantages and disadvantages. Providers must then explain to the customer what these advantages and disadvantages are.
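The kind of explicit choice described above can be illustrated with a small, purely hypothetical Python sketch (the preset names, fields, and wording are invented for illustration): instead of one silent default, the sign-up flow offers named presets whose trade-offs are spelled out, and requires the user to pick one.

    from dataclasses import dataclass

    # Hypothetical sketch: named privacy presets with their trade-offs
    # made explicit, instead of a single default applied silently.
    @dataclass(frozen=True)
    class PrivacyPreset:
        name: str
        searchable: bool       # can strangers find this profile?
        posts_visible_to: str  # "everyone" or "contacts"
        trade_off: str         # shown to the user before choosing

    PRESETS = [
        PrivacyPreset("open", True, "everyone",
                      "Maximum reach: useful if your business depends on being found."),
        PrivacyPreset("private", False, "contacts",
                      "Maximum privacy: only people you already know can see you."),
    ]

    def choose_preset(name: str) -> PrivacyPreset:
        """Return the preset the user explicitly selected; there is no silent default."""
        for preset in PRESETS:
            if preset.name == name:
                return preset
        raise ValueError(f"unknown preset: {name!r}")

    print(choose_preset("open").trade_off)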

The instrument of “data portability”, which Schaar also mentions, only makes sense for services where users administer and/or display a lot of their own data and where the inability to extract this data in a fairly convenient way would have a prohibitive effect on changing service providers. It is not a data protection issue but a competition issue, and it should be treated as such. Unduly extending a so-called right to data portability to other services, such as online shops, or to data controllers in general could place an excessive burden on many of these controllers and might create new competition problems while bringing no advantages in terms of data protection.
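Where portability does make sense, the technical core is modest: giving users their own data in a documented, machine-readable format. The following Python sketch assumes an invented JSON schema purely for illustration; real services would publish their own.

    import json

    # Hypothetical sketch: bundle a user's profile and posts into one
    # portable JSON document that a competing service could import.
    def export_user_data(profile: dict, posts: list) -> str:
        """Serialize a user's own data in a documented, portable format."""
        return json.dumps(
            {"format_version": "1.0", "profile": profile, "posts": posts},
            indent=2, ensure_ascii=False,
        )

    print(export_user_data(
        {"display_name": "Alice", "joined": "2012-05-01"},
        [{"date": "2013-08-14", "text": "Hello, world."}],
    ))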

Transparency is the keyword to which companies (and governments) should feel obliged and bound, as it is the basis for a fair relationship with customers and anyone else whose data is processed. Schaar is right to ask for policies that can be consumed in a reasonable time and manner. But asking for this is easier than simultaneously complying with legal provisions and reducing the information to the bits relevant for the consumer (who generally cares about data protection but in many cases still cannot be bothered to go into the technical details of the services he is using). It is a complex task to set up global processes and products so that they comply with all the diverging provisions of different states serving different purposes (security and fraud prevention legislation, civil law, data protection laws, etc.) and at the same time to have unified, transparent processes that can easily be explained to each customer. Nevertheless, companies need to earn and retain the trust of their customers, whether these are other companies or consumers. Trust is built when you feel that your partner and his work are reliable and his actions are predictable. Companies therefore need to work on their transparency in the form of fair communication to and with customers about the basic ways in which their data is handled. A predictable way of handling customer data might become one quality aspect of the products or services companies want to sell.
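One conceivable way to reduce a long policy to the relevant bits – sketched here in Python with invented field names, not any existing standard – is a short machine-readable notice layered on top of the full policy text.

    # Hypothetical sketch: a short, machine-readable notice answering the
    # questions most consumers actually have, linking to the full policy.
    SHORT_NOTICE = {
        "controller": "Example GmbH",
        "data_collected": ["email address", "usage statistics"],
        "purposes": ["service delivery", "fraud prevention"],
        "shared_with_third_parties": False,
        "retention_period_days": 180,
        "full_policy": "https://example.com/privacy",
    }

    def render_short_notice(notice: dict) -> str:
        """Format the short notice as a few human-readable lines."""
        return "\n".join(f"{key.replace('_', ' ')}: {value}"
                         for key, value in notice.items())

    print(render_short_notice(SHORT_NOTICE))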

But all of industry’s efforts to act as trustworthy data controllers might be thwarted if EU member states and third countries do not play by the same rules that they impose on companies. The rules under which intelligence services and other authorities may access user data for security reasons should also be as clear and transparent as possible. This is a difficult task for each government, internally and in relation to foreign partners, as it might mean limiting its own power and information advantage. But a balance between security interests and the individual’s interest in freedom of action has to be found. Otherwise, governments might face not only democratic problems but also economic deficits in the long run. Calls on the legislator to draw exact boundaries between the acceptable and unacceptable use of personal data apply to the actions of private actors as well as to those of state authorities. Just as companies need clear rules within which they can act, so do state authorities and intelligence services. In both cases, the existence of clear and transparent rules also allows effective enforcement through adequate sanctions.

Despite the challenges we are facing, I think Peter Schaar’s answer is yes: the Internet and big data are compatible with data protection. And again I agree with him. Combining the two is feasible if governments and industry make the effort – defining transparent and fair rules for companies and authorities on the one hand and for data subjects on the other, developing new technical and organizational measures that fit new data processing techniques, and maintaining a fact-based societal discourse on how we want to live in our digital world.

MIND – Multistakeholder Internet Dialog
MIND stands for Multistakeholder Internet Dialog. The discussion paper series is a platform for modern polemics in the field of Internet governance. Each issue is structured around a central argument in the form of a proposition by a well-known author, which is then commented on by several actors from academia, the technical community, the private sector, civil society, and government in the form of replies.
