Thursday, November 28, 2019

About Privacy 2030: The Posthumous Manifesto of the Patriarch of the Privacy Intelligentsia

Originally published on LinkedIn.
One of the issues with contemporary legal education, especially legal education in countries whose legal systems enjoy a certain prestige, is a tendency (let's call it a positivist tendency) to look down on policy discussions. Duncan Kennedy, one of the founders of the Critical Legal Studies movement, offered some interesting insights on this issue in his brilliant critique of legal education:

“(…)in most law schools, it turns out that the tougher, less policy-oriented teachers are the more popular. The softies seem to get less matter across, they let things wander, and one begins to worry that their niceness is at the expense of a metaphysical quality called “rigor,” thought to be essential to success on bar exams and in the grown-up world of practice”.

When discussing the policy underpinnings of the GDPR, for example, I have been accused by highly esteemed colleagues of something even worse: of being very interested in, or even "very good" at, the "philosophical questions". Anyone who has gone to law school knows that there is not an ounce of compliment in such a statement.

Now, the reason I start this write-up with an apparent digression from the theme promised in the title is that Buttarelli's manifesto is very important in one critical respect: it elevates the status of policy discussions. It shows how critically important policy discussions are for legal practitioners and for virtually anybody who works in the tech industry in the year 2019. No legal practitioner working in anything related to technology can claim to be well informed without having read Privacy 2030 (yes, even if you practice at the so-called Magic Circle). I would argue that a similar statement applies to tech CEOs, and I would submit that even if you are a cynical CTO secretly hiding enormous stockpiles of personal data on a removable hard drive somewhere, you should read Privacy 2030, if only because it provides first-hand insights into how the enemy thinks. Let's make no mistake: this is it. This is, give or take, the definitive compendium of all the aspirations, latent dystopias and anxieties that give meaning to data protection law and privacy law in the European Union.

In order to keep this write-up succinct, I will refrain from examining the main themes of the manifesto's six chapters. Instead, I will suggest a few more reasons why we should celebrate Privacy 2030, and then propose an incipient critique. Let's start with the former: the manifesto seems innovative in bringing in a policy aspect that is still foreign to the typical ESG discussions one encounters nowadays in the context of technology, a space where companies that are not hardware manufacturers tend to be perceived as greener and where that item of the due-diligence checklist is rapidly ticked off. I will quote directly from the manifesto:

“The religion of data maximisation, notwithstanding its questionable compatibility with EU law, now appears unsustainable also from an environmental perspective (…)”

So, while the manifesto does not abound in hard evidence for the premise that data maximisation has a tangible effect on climate change, the author offers some interesting suggestions about where that question might lead us: a "Digital Green New Deal", perhaps. And this brings us to one more reason why the manifesto is an important read for the times we live in: given how ambitiously idealistic it reads, it shows that even in our time it is possible to be both an unrestrained titan of humanism and a world-class technocrat. It occurs to me that Buttarelli was one of the last fellow liberals. This is a slight digression, but if specimens of this endangered species are to be found only as a byproduct of the European project, I am tempted to think that we have one more critical reason to preserve it. I will write no more hagiography, because there is enough posthumous praise circulating at the moment, but this is one of those men whose hagiography does not strike me as particularly annoying. We need many more Buttarellis in the generations to come.

Back to the subject that occupies us, the manifesto is also interesting in the sense that it proposes some last-resort measures that need to be on the table if we are to make sure that certain technologies are harnessed for the good: 

“Impose a moratorium on dangerous technologies, like facial recognition and killer drones, and pivot deployment and export of surveillance away from human manipulation and toward European digital champions for sustainable development and the promotion of human rights.”

Moratoriums sound somewhat radical, just like Alexandria Ocasio-Cortez's suggestion that Facebook sit out the 2020 elections if it does not assume responsibility for the way its business affects democracy. But even if one thinks (as I tend to) that corporations should not be put in a position to decide what is truthful enough for people to read, these last-resort measures seem necessary to ensure that all stakeholders take the policy discussions at hand very seriously.

One final reason for giving the manifesto a good read: there is a very brief afterword by Shoshana Zuboff, whose work was first introduced to me by the always acute Tim Walters of The Content Advisory at a conference (yes, one can actually learn new stuff at conferences). Her afterword is not a particularly compelling piece, but it does work as a privacy-contextual introduction to Zuboff's notion of surveillance capitalism, which has been portrayed as some sort of inadvertent Marxism by Evgeny Morozov in his great review. Techno-Marxism of the very interesting kind, I feel like writing.

And now to the incipient critique:

The manifesto tends to follow the typical discursive recipe of contemporary policy discussions about privacy in the EU: it devotes many words to listing and describing a good number of latent dystopias, of extremely undesirable states of affairs that we must urgently prevent by means of regulation. On the other hand, it devotes far fewer words, and exactly one page, to proposing a "10-point plan for sustainable privacy". Let me try to be fair: manifestos do not need to offer all the answers, and Privacy 2030 does propose some brighter views on technology. Still, its decided effort to unearth, expose and imagine all potential risks and pitfalls of technological advances is dangerously close to a neo-Luddite impulse of sorts: a tendency to believe that technology is mostly and mainly a source of latent dystopias.

Now, precisely because technology is not just a source of dystopias but also an important instrument for innovation and progress, it would be wise to look at it with much more sympathy. Zuboff's afterword is right in calling out the lobbyist talking points for what they are: regulation will not necessarily stifle innovation. But there is good evidence that bad regulation will. In order to have a civil discussion about the future of privacy and the regulation of technology, it would be great to start by recognizing that not every bit of optimism is corporate propaganda, and that skepticism about the ability of regulation to solve these problems is not always an exercise in techno-solutionism.

Privacy 2030 is a very important read, but I want to insist on this: a hysterical perception of technology and the world we live in will most certainly lead us to a kind of policy discourse so desperate to rule out latent dystopias that it prevents us from seizing the tangible opportunities of the present. I would echo Zuboff's invitation: let's make sure that we fight all the fires together, but let's also make sure we leave some room for the flame of progress.

You can download the manifesto directly from the IAPP resource center here.

Disclaimer: The opinions expressed by the author in this article are strictly personal and do not reflect the official position of the Mash Group or any of its directors or employees. Any threatened law-suits, hate-mail or angry rebuttals in response to this write-up are ideally to be addressed to the author directly, in the comments. :)

Monday, November 25, 2019

About the EBA's Guidelines on Loan Origination and Monitoring


The European Banking Authority (EBA) is about to issue a set of guidelines on loan origination and monitoring with a very broad scope of application. In fact, paragraph 12 of the draft signals the EBA's intention to make all the rules in Section 5 (all rules pertaining to loan origination procedures) applicable, inter alia, to all creditors as defined in point (b) of the Consumer Credit Directive. Put simply, that would mean that any natural or legal person who grants or promises to grant consumer credit in the course of his or her trade, business or profession in the European Union would be subject to the rules governing loan origination procedures as set out in the guidelines.

I tend to think that such a broad scope of application raises the question of whether the EBA is exceeding its mandate under Regulation (EU) No 1093/2010, but at first glance that seems a rather complex matter that merits its own write-up, and one that could well be the subject of lively discussion amongst esteemed colleagues in the near future. Given that this new set of guidelines will also apply to Fintech firms in the credit space, I would like to focus on what I see as the substantive aspect of the matter: what do these guidelines mean for consumer credit providers who rely on automated decisions and processes for loan origination? I confess I am skeptical: I have argued in the past that the EBA did no favors to Fintech and open banking by issuing a set of regulatory technical standards that are neither technologically neutral nor business-model neutral, and that seem to cater directly to the talking points of certain actors who have little incentive to embrace the open-banking ethos of PSD2.

Susanne Grohé from Aderhold (one of Europe's most Fintech-savvy law firms) hinted at one of the major shortcomings of the guidelines by suggesting that they appear to follow the premise that the use of technology in loan origination is merely a risk factor, dismissing the fact that technology has contributed, and can further contribute, to building more robust loan origination processes. I concur: the EBA displayed that sort of tech-averse tendency in the RTS on Strong Customer Authentication, and I believe it has made the same mistake with this new set of guidelines. In this note, I would like to contribute to that discussion by pointing out some very specific rules proposed by the draft guidelines that are particularly problematic for consumer credit providers that rely heavily on automation in their loan origination processes. Let's take a look:

Rule: "Institutions and creditors should have a sufficiently comprehensive view of the borrower's financial position, including an accurate and up-to-date comprehensive view of all the borrower's credit commitments (single customer view)."

Why is it problematic? This rule is problematic on two counts. It is rather vague, in the sense that it does not specify whether the comprehensive view in question must also include all of the borrower's credit commitments with third parties, for example. If the latter is the correct interpretation, the rule is even more problematic, because it somehow assumes that consumer credit providers across the European Union can access some kind of database of outstanding credit commitments that is updated in real time by all consumer credit providers as they issue new credit to their borrowers. Such a database is perhaps a good idea, but it does not yet exist, so it is not reasonable to require consumer credit providers to be aware of all the credit commitments of a potential loan applicant with a good degree of certainty.

Rule: "Institutions and creditors should apply metrics and parameters to have an accurate single customer view that enables the assessment of the borrower's ability to service and repay all its financial commitments."

Why is it problematic? This rule significantly worsens the problem mentioned immediately above. Not only does it presuppose an omniscient single customer view, but it goes so far as to require creditors to make a proper assessment of the borrower's ability to service all its financial commitments.

Again, consumer credit providers would only be able to comply with this rule if they had access to some kind of omniscient database offering a view of all financial commitments of all potential applicants at any given time. That is in the realm of science fiction at the moment.

Arguably, the consumer credit provider could request this information directly from loan applicants, but it is very optimistic (to say the least) to think that consumer credit providers will be able to build accurate affordability analyses based exclusively on information provided by loan applicants who have a vested interest in getting a positive credit decision. Even if we assume zero cases of bad-faith credit applications (where loan applicants hide outstanding financial commitments, for example), the question remains: how are consumer credit providers supposed to verify the information provided by the applicants? Should they make use of the omniscient database that seems to exist only in the imagination of the drafters?

Rule: "The decision to approve or decline the loan application (credit decision) should be taken by the relevant credit decision-making body in accordance with the policies and procedures and governance arrangements as set out in Section 4.3."

Why is it problematic? This rule seems oblivious to the fact that many consumer credit providers automate their credit decisions. In fact, it is so anachronistic that it seems to betray a deeply rooted belief that credit decisions can only be made by some kind of hyper-enlightened credit committee that looks at every applicant's paperwork and issues sentences of unimpeachable wisdom. Even worse, this rule seems to betray the assumption that committees make better decisions than, say, adequately programmed machines. This is one of the rules where the tech-averse (or should we say tech-oblivious?) attitude pointed out by Susanne is particularly clear.

Rule: "Credit decision should be well documented, provide a record of views and reservations, especially any dissenting views, of the credit decision-making body members. In case of a decision to approve the loan application, the credit decision should contain the information on the key features of a loan being offered to the borrower, including information on the amortisation, price, covenants and required collaterals. Such credit decision should be also the basis of the loan agreement."

Why is it problematic? Once again, this rule is baffling because it doubles down on the problem mentioned immediately above. The underlying assumption that all credit decisions are made (or should be made) by human members of collegiate decision-making bodies is even clearer in this wording.
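Incidentally, an automated credit decision can produce exactly the kind of well-documented, auditable decision record that the draft seems to reserve for committees. Here is a minimal, purely hypothetical sketch in Python: the rule names, thresholds, and fields are invented for illustration and come neither from the EBA draft nor from any real credit policy.

```python
# Hypothetical sketch: an automated credit decision that yields a full,
# auditable record of every rule evaluated, plus the key features of the
# loan offered. All thresholds and field names are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional, Tuple


@dataclass
class DecisionRecord:
    approved: bool
    reasons: List[Tuple[str, bool]]   # every rule evaluated, pass or fail
    loan_terms: Optional[dict]        # key features: amount, price, term
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def decide(application: dict) -> DecisionRecord:
    reasons = []
    # Rule 1: declared debt service must stay below 40% of net income.
    dsr = application["monthly_debt"] / application["net_income"]
    reasons.append(("debt_service_ratio <= 0.40", dsr <= 0.40))
    # Rule 2: applicant must not carry a default flag.
    reasons.append(("no_default_flag", not application["default_flag"]))

    approved = all(passed for _, passed in reasons)
    terms = None
    if approved:
        terms = {"amount": application["requested_amount"],
                 "annual_rate": 0.09, "term_months": 24}
    return DecisionRecord(approved=approved, reasons=reasons, loan_terms=terms)
```

The point of the sketch is that the decision rationale is recorded mechanically for every application, approved or declined, which is arguably a more complete audit trail than minutes of a committee meeting.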

So, how did the EBA fare this time? I suppose it is not entirely fair to judge them merely on the merits of the first draft of the guidelines, but the consultation paper seems to betray the same brick-and-mortar worldview that permeated the Regulatory Technical Standards on SCA. Let's remember that the European Commission did ask the EBA in the past to amend its RTS on SCA to ensure that non-compliance by banks did not prevent AISPs and PISPs from offering their services to end users.

This brick-and-mortar worldview is pervasive in these guidelines for loan origination and is clearly palpable in some of the rules listed above, which seem to have been drafted with insufficient awareness of the current state of affairs of the very phenomenon they intend to regulate. If one is to regulate credit decisions in the 21st century, it is important to bear in mind that many (if not most) consumer credit providers automate their credit decisions and that AI will play a very important role in loan origination in the near future. More importantly, any regulatory effort in the 21st century should take into account that artificial intelligence might just be a powerful tool for overcoming decision biases and achieving economies of scale in many realms. Financial inclusion by means of access to credit, for example, will be much harder to scale up if regulation doubles down on the strange notion that a human must review every single credit decision in order to ensure its conformity with any standard, be it a responsible lending standard or a credit policy.

I submit that these draft guidelines are pernicious in one very critical way: They anchor the loan origination process in the past by conceiving it as the by-product of some kind of microcosm where the only right decisions are made by committees of the wise.
