Homo Digitalis 05 | A Public Administration Route to Algorithmic Accountability

Hello,

Welcome to the fifth edition of my newsletter, Homo Digitalis. In the last edition of this newsletter, I spoke about how the regulatory conversation about AI has been disproportionately focused on the speculative harms of generative AI, including the recent AI advisory of the Indian government. An opinion piece in the New York Times last year cited a survey of over 700 top academics and researchers, half of whom thought that there was a “10 per cent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.” This news was covered by other reputable sources including The New Yorker and Vox. The fear-mongering about artificial intelligence (AI) systems, most vividly on display at the UK’s AI summit last year, is perhaps only matched by the hype around them, both of which ascribe sentient, superhuman abilities to systems that are essentially statistical models running on very large quantities of data. These over-inflated fears turn our much-needed attention away from the real-life dangers and risks of AI, which are numerous and far more prosaic. As I write this, machine learning (ML) algorithms intermediate our use of everyday services and determine how we spend our time, what content we read, view and consume, what opportunities we pursue and, perhaps, what we think. Despite their ubiquity in diverse aspects of life, the common denominator of AI/ML remains their opacity.

In Focus: A Public Administration Approach to Achieving Algorithmic Accountability

Last month I published a series of three long-form essays under my fellowship project with Mozilla Foundation, Knowing without Seeing, which look at how traditional public administration principles can guide us in the regulation of everyday instances of AI. In these essays, I eschewed regulatory innovation and instead tried to centre principles which have served us well for several decades in governing the technological systems that intermediate our interactions with public and private parties. As AI becomes more pervasive across sectors and institutions, a parallel, if not obvious then certainly useful, can be drawn between the delegation of decision-making from humans to machines and the delegation of power from identifiable and elected lawmakers to recondite and impenetrable administrative institutions. Given the labyrinthine nature of its subject, the primary business and, as Elizabeth Fisher puts it, the “central obsession” of administrative law has been accountability.

The Suitability of Administrative Law in AI Regulation

In the first essay of this series, I demonstrate how the intricate discipline of administrative law, and the principles which have evolved in common law to hold its Daedalian institutions to account, are instructive in how we must make AI, whether used by public bodies or private parties, more scrutable.

The domain of administrative law deals with secrecy and disclosure and offers several guidelines on why, when, and how to disentangle the concealed workings of public bodies. The reasons behind secrecy range from the deliberate guarding of secrets to the complex and technical nature of public administration, from institutional design intended to evade accountability to less-than-meaningful transparency measures where access to information does not translate into understandability. For any researcher or practitioner of algorithmic transparency, these obstacles to meaningful transparency in the domain of administrative law will sound eerily similar.

When we seek more transparency, what we seek can vary immensely based on the context and circumstances of the institutions and stakeholders involved, the nature of the information we are dealing with, how the information is being delivered, who the recipient of the information is, and the consequences of disclosure or, conversely, of secrecy. In my essay, I present four essential factors that determine the design of any transparency initiative, drawn from public administration but equally relevant to algorithmic transparency.

The first is why transparency is being sought. In another essay, I have argued that it may be most accurate to consider transparency as occupying a space somewhere between the primary and secondary virtues (see MacIntyre’s conception of virtues). Transparency is key for the public to achieve greater autonomy and to facilitate greater accountability of institutions. Even though it is useful to view it largely as an instrumental value, a means to other ends, it does not make sense to place a burden on every instance of transparency to demonstrate that it will lead to some policy goal. All else being equal, it is reasonable to presume that more publicly available information is preferable to less.

The second question is what needs to be transparent. It is useful to think of transparency as a sliding scale with full secrecy at one end and complete disclosure at the other. The design of a transparency initiative is often about making choices about what needs to be disclosed; rarely is complete disclosure warranted or even desirable. The laziest approach to deciding what to disclose is to reveal only a resource that already exists, an approach often followed, for instance, by the Indian government in response to Right to Information applications. In many cases, information is created primarily for the purpose of disclosure: quarterly and annual financial accounts, for example, are compliance and reporting requirements in several contexts. The duty to give reasons for administrative and judicial bodies requires the explicit recording of reasons and thus the creation of an essential resource.

The next question is when something should be made transparent. There are two facets to the temporal nature of transparency. The first is the trigger for transparency: when does the disclosure, or the requirement to disclose, kick in? The second is the period for which the information remains transparent before it reverts to secrecy.

The final question is who the recipient of transparency is. When we think of transparency, the first instinct is to imagine all citizens or consumers as its audience. In the case of freedom of information legislation, this is true. In some cases, however, what is being made visible, by its very nature, requires expertise and training to consume, such as financial reports or technical assessments. In such cases, even though most people cannot be expected to engage meaningfully with the documentation, they rely on experts to interpret it and educate the public about it.

The rich tradition of administrative law and the complex body of public administration that it governs provide us with very useful questions as we approach algorithmic transparency. As much as is made of the inherent opacity of AI systems, we tend to approach their transparency obligations with a felt need to reinvent the wheel. On the contrary, we should rely more firmly on the approaches to transparency that have served us well.

AI Transparency in the Public Sector

In the second essay of the series, I look at how administrative law principles can guide the creation of accountability mechanisms whenever AI is used in public systems. As public authorities begin to adopt AI into decision-making processes for public functions and to determine the ideal form of intervention(s), the extent to which, and the way in which, decision-making capabilities can be and are delegated to AI needs to be questioned from the perspective of its transformative impact on justice, civil liberties, and human rights. The justification for transparency in the exercise of public functions draws from standards of due process and accountability evolved in administrative law, where decisions taken by public bodies must be supported by recorded justifications, a consequence of both procedural and substantive fairness. A further extension of this principle is the need for administrative authorities to record reasons so as to exclude or minimize arbitrariness. In some jurisdictions such as the UK and the US, there are statutory obligations that require administrative authorities to give reasoned orders.

The introduction of an algorithm to replace, or even only to assist, the human decision-maker represents a challenge to this assumption and thus to the rule of law, and the power of legislatures to decide upon the legal basis of decision-making by public bodies. Marion Oswald argues that “administrative law—in particular, the duty to give reasons, the rules around relevant and irrelevant considerations and around fettering discretion—is flexible enough to respond to many of the challenges raised by the use of predictive machine learning algorithms and can signpost key principles for the deployment of algorithms within public sector settings.”

Over time, two sets of reasons have emerged in case law for imposing a duty to give reasons on administrators. The first set is instrumental in nature: the reasons contribute to other established objectives. These objectives include the accuracy rationale: a public body will make more accurate decisions when it is required to think about, and set down reasons for, its decisions in a systematic manner. Another objective is the ‘review rationale’: courts and organs of review often recognize that an unreasoned decision is very difficult to review. A public confidence rationale also features in justifications for the duty to give reasons: the provision of reasons by public authorities is essential for demonstrating that laws are being applied consistently and carefully, an extension of the legal principle that justice must not only be done but also be seen to be done. Aside from these instrumental reasons, the duty to give reasons also arises from its intrinsic basis in principles of fairness, which are central to administrative accountability.

If we consider the duty to give reasons as informing the threshold for transparency, there may still be diverse ways in which it can be implemented. Where algorithmic systems use transparent model techniques such as linear regression models, decision trees, k-nearest neighbor models, rule-based learners, general additive models, and Bayesian learners, it would be possible to create ante-hoc explanations even before the deployment of these systems. This is a prime example of a case where the trigger for the creation of transparency documentation can be designed, through contractual obligations between the public body and the contractor, to precede deployment or implementation. Conversely, for models which lend themselves better to post-hoc explainability, the trigger can be a demand for the algorithmic logic, or for justifications of specific decisions such as local or example-based explanations.
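To make this concrete, here is a minimal illustrative sketch, not drawn from the essays, of what an ante-hoc explanation for a transparent model could look like: a shallow decision tree trained on hypothetical eligibility data, whose complete rule set can be exported as readable text and attached to procurement or transparency documentation before the system ever takes a decision. The feature names and data are invented for illustration.

```python
# Illustrative sketch only: an ante-hoc explanation for a transparent model.
# A shallow decision tree is trained on hypothetical benefit-eligibility data
# and its full rule set is exported as text before deployment, so it can be
# attached to transparency or procurement documentation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical features: monthly_income, household_size, prior_claims
X = rng.normal(loc=[30000, 4, 1], scale=[10000, 2, 1], size=(500, 3))
y = (X[:, 0] < 25000).astype(int)  # stand-in eligibility rule for illustration

feature_names = ["monthly_income", "household_size", "prior_claims"]
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The exported rules are readable by a non-technical reviewer and exist
# ante hoc, i.e. before any individual decision is taken.
print(export_text(model, feature_names=feature_names))
```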

A duty to give reasons does not end at the mere recording of reasons. The reasons, as legal precedents have dictated, must be intelligible and adequate to enable “the reader to understand why the matter was decided as it was and what conclusions were reached on the ‘principal important controversial issues.’” Further, the “reasoning must not give rise to a substantial doubt as to whether the decision-maker erred in law, for example by misunderstanding some relevant policy or some other important matter or by failing to reach a rational decision on relevant grounds.” For an administrative decision which has been delegated to an algorithmic system, either wholly or partly, there is a legal mandate to clearly demonstrate the considerations on the basis of which the decision has been taken. A human-in-the-loop supervisor should be able to ascertain that those considerations are relevant.

This requires two sets of factors to be accounted for in the decision matrix. The first is the availability of the relevant considerations to a human agent in a form that they can comprehend. For instance, test set accuracy could be used to evaluate the considerations at play. Guestrin et al. take a pragmatic approach: they set out a general scheme to provide, for any decision by a learning algorithm, a locally linear and interpretable approximation of that answer or decision. The specific dataset they looked at, despite yielding high accuracy on validation data, contained features that did not generalise; the high accuracy was therefore not a true indicator of the model’s performance outside the test setting, in the real world. The second requirement is the presence of a human actor with prior knowledge and domain expertise who can identify considerations irrelevant to the task at hand. In cases such as the above, where individual prediction explanations are possible, the human-in-the-loop will be able to determine whether a decision was based on arbitrary or irrelevant reasons and, thus, needed to be rejected.
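For readers who want a feel for what such a locally linear approximation looks like in practice, below is a small, hedged sketch in the spirit of Guestrin et al.’s approach. It is not their code, and the feature names are hypothetical: a black-box classifier’s behaviour around one decision is approximated by a weighted linear surrogate whose coefficients a human reviewer can scan for irrelevant considerations.

```python
# Illustrative sketch (not the authors' code): a LIME-style local explanation.
# We approximate a black-box model's behaviour around one decision with a
# weighted linear surrogate, whose coefficients a human-in-the-loop can scan
# for irrelevant considerations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical data; the last feature is pure noise by construction.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                             # the decision under review
perturbed = x0 + rng.normal(scale=0.5, size=(200, 4))
preds = black_box.predict_proba(perturbed)[:, 1]

# Weight perturbations by proximity to x0, then fit a local linear surrogate.
weights = np.exp(-np.sum((perturbed - x0) ** 2, axis=1))
surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)

for name, coef in zip(["income", "tenure", "age", "noise"], surrogate.coef_):
    print(f"{name:>6}: {coef:+.3f}")  # a large weight on 'noise' is a red flag
```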

Administrative Law in Governing Private AI

For the past few years, several scholars have written about the rationale for regulating artificial intelligence not only through public law, but also through private law. There have been several piecemeal regulations which address aspects of machine learning-enabled algorithms. However, the first comprehensive attempt at regulating AI has emerged in the form of the EU’s AI Act. The AI Act makes the case for algorithmic transparency in private sector use as well. Instead of using a rights-based framework, it adopts a risk-based approach to regulation, classifying different uses of AI systems by the degree of risk they pose and imposing a graded set of obligations, ranging from no mandatory obligations (with encouragement to create codes of conduct) for low-risk AI systems at one end, to prohibition of use cases which involve unacceptable risks (manipulative subliminal techniques, exploitation of disability, social scoring, remote biometric identification by law enforcement) at the other. In the third essay, I write about how administrative law principles can be applied in governing private AI under the AI Act.

Perhaps the most critical transparency provision of the AI Act, Article 13 sets transparency requirements for high-risk systems. Aside from being the most important transparency requirement in the AI Act, and thus of great significance for AI systems deployed in the EU, it also offers a useful and replicable template for future regulation in other parts of the world.

The AI Act classifies high-risk systems into two categories. The first comprises AI systems which are embedded in products subject to third-party assessment under sectoral legislation. Here, it refers to AI systems that can be used as a safety component of a product covered by any of the nineteen EU regulations designed to harmonize standards for certain products across the market. This means that when an AI system becomes part of a product which, independently of the AI Act, needs to undergo third-party assessment, it will be classified as a high-risk system. The regulatory assumption here is that the need for third-party assessment signifies a high degree of risk. The second category of high-risk AI systems are stand-alone systems which are deemed to be high-risk when deployed in certain areas. These areas include “critical infrastructures (e.g. transport), that could jeopardise the life and health of citizens; educational/vocational training, that may determine the access to education and professional course of individuals (e.g. scoring of exams); safety components of products (e.g. AI application in robot-assisted surgery); employment, workers’ management and access to self-employment (e.g. CV-sorting recruitment software); essential private and public services (e.g. loan scoring); law enforcement that may interfere with fundamental rights (e.g. evaluation of the reliability of evidence); migration, asylum and border control (e.g. verification of authenticity of travel documents); and administration of justice and democratic processes (e.g. applying law to a concrete set of facts).” Broadly speaking, this means that AI/ML systems which can harm a person’s health or safety or infringe on their fundamental rights are classified as high-risk.

The operative part of the provision sets a legal threshold: the system must be sufficiently transparent to enable users to interpret its output and use it appropriately. If the proposal were to become law in its current form, its implementation would be complicated. Depending on how we answer multiple questions about the nature of the transparency required, the implementation could vary. Article 13 ties transparency to ‘interpretability’, but there is little consensus on what interpretability means in the XAI literature. One source defines it as the AI system’s ability to explain, or to provide meaning, in ‘understandable’ terms to an individual. In the XAI literature, understandability is often defined as the ability of a model to make a human understand its function without any need for explaining its internal structure or the algorithmic means by which the model processes data internally. [2] Another source defines interpretability in a contrasting manner, requiring traceability from input data to the output.

It is worth noting that the standard of ‘interpretability’ is comparable to the administrative law standard of duty to give reasons that we discussed in detail here. Before delving into the XAI literature for suitable examples of methods that can deliver interpretability, it would be worth our while to first establish the contours of interpretability by tying it to the duty to give reasons.

Depending on the nature of the algorithm and the functions it performs, there could be multiple ways in which the ‘duty to give reasons’ test can be applied here. The duty to give reasons in an administrative context does not merely explain why a decision was or was not taken in a vacuum, but also records the knowledge of the context within which the decision can be understood. Thus, algorithm audits, which do not technically serve as explanatory tools or interpretable models, can aid the delivery of the duty to give reasons by explaining the broader context of the outcomes a system leads to in its socio-technical setting.

If we have to look for an XAI technique comparable to the output interpretability requirement, it is perhaps best served by the idea of global model explanations. One prominent way to classify XAI methods is by drawing a distinction between explaining a model locally (i.e., a single prediction) and explaining it globally (i.e., the whole model). The right to explanation is analogous to global explanations, except that it is perhaps an even lower standard, in that information about the algorithmic logic need not include the detail provided by several global methods such as TCAV. The transparency duty in Article 13 of the AI Act most decidedly goes beyond it to also include output interpretability. Thus, it entitles a user to receive information about an output or a single prediction, aside from global explanations.
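The local/global distinction is easiest to see with a simple model. The sketch below is purely illustrative, with hypothetical feature names: for a linear classifier, the coefficients describe the whole model, a global explanation, while the coefficient-times-value contributions for one row explain a single prediction, a local explanation.

```python
# Illustrative sketch of the global/local distinction using a linear model.
# Global: the model's coefficients describe its behaviour over all inputs.
# Local: coefficient * feature value for one row explains a single prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = (2 * X[:, 0] - X[:, 1] > 0).astype(int)
names = ["income", "debt", "tenure"]  # hypothetical feature names

model = LogisticRegression().fit(X, y)

print("Global explanation (whole model):")
for n, c in zip(names, model.coef_[0]):
    print(f"  {n:>7}: {c:+.2f}")

x0 = X[0]
print("Local explanation (one prediction):")
for n, c, v in zip(names, model.coef_[0], x0):
    print(f"  {n:>7}: contribution {c * v:+.2f}")
```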

Even outside the purview of public law, several decisions are made by private platforms where they play arbiter of free speech, privacy and data protection, financial and health services, and access to information, news, and entertainment. The tried and tested concepts of reasonableness, necessity, risk assessment, and foreseeability, which are well-entrenched in public law, can suitably inform the nature of the supervision that humans exercise over high-risk systems employing algorithms. These concepts do not lend themselves well to datafication or feature extraction; therefore, creating suitable points for the intervention of human discretion is critical.

That is all for this issue. Until next time.
