Algorithmic Unfairness – Automated decision making

Businesses develop proprietary algorithms to analyse vast amounts of data for trends, patterns and hidden nuances. These algorithms are typically trade secrets that aid the business in taking commercial decisions, or may constitute the business model itself. Take, for instance, an algorithm that assesses an applicant’s worthiness for accident insurance: it could analyse the applicant’s driving behaviour and history, the general rate of accidents caused by people in the same age group, location, and so on. It certainly helps the insurance company choose the “right applicant” who deserves insurance should there be an accident, or the “profitable applicant” who will not cause accidents. A fair amount of discrimination and profiling comes about as a result.
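To make the profiling effect concrete, here is a minimal sketch of such a risk score. Every feature, weight and threshold below is hypothetical, invented purely for illustration; real insurers use far more elaborate (and secret) models.

```python
# Toy accident-insurance risk score built from the kinds of features
# described above: individual driving history, age group, location.
# All weights and the cut-off are hypothetical.

def risk_score(applicant):
    """Return a risk score; higher means riskier to insure."""
    score = 0
    score += 10 * applicant["past_accidents"]                  # individual history
    score += 5 if applicant["age_group"] == "18-25" else 0     # group statistic
    score += 8 if applicant["postcode_risk"] == "high" else 0  # location proxy
    return score

def decide(applicant, threshold=12):
    return "accept" if risk_score(applicant) < threshold else "reject"

# Two applicants with identical (clean) driving records can receive
# different decisions purely because of age group and postcode --
# the profiling the text describes.
a = {"past_accidents": 0, "age_group": "35-44", "postcode_risk": "low"}
b = {"past_accidents": 0, "age_group": "18-25", "postcode_risk": "high"}
print(decide(a), decide(b))  # accept reject
```

Note that applicant `b` is rejected without having caused a single accident: the score is driven entirely by group membership and a location proxy, not individual behaviour.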

A picture says a thousand words: Lauren Smith’s article on the Future of Privacy Forum blog includes a table classifying the various kinds of discrimination and profiling that can result.

It is opined that deep learning neural networks provide great predictions but are not very transparent, as they do not provide a causal audit trail. For now, the question remains one of better predictions versus transparency.

Artificial Intelligence tools have been around for a while and are used extensively across industry verticals. Loomis v. Wisconsin challenged the use of proprietary, closed-source risk assessment software in sentencing Mr Loomis to prison. The case alleged that the software, “Correctional Offender Management Profiling for Alternative Sanctions” (COMPAS), violates due process rights by taking gender and race into account. The algorithms used were considered trade secrets, and the causal audit process was not clearly known to the Judge.

GDPR, the General Data Protection Regulation in the EU, provides for certain “decisional privacy rights”, i.e. the privacy of certain significant self-defining choices.

Article 22 of GDPR says that “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” The Article further provides for certain exceptions. The law will create a ‘right to explanation’, where a user can ask for an explanation of an algorithmic decision that was made about the user.

It may be noted that GDPR is quite sprawling, with wide territorial scope that can reach any place where the data of EU data subjects is processed.

Explaining explanations: if a human Judge is expected to give reasons for a decision, can the AI explain itself as well? The New York Times Magazine also recently asked the question: can AI be taught to explain itself?

DARPA’s Explainable Artificial Intelligence (XAI) programme provides some answers on “explainability”.

New machine learning and AI systems will need a strategy that includes the ability to produce explainable models, in a way that humans can understand and trust.

  • Transparency about how a decision is made is not enough. The explanation should cover “why” the decision was made, which might require human intervention.
  • The “outcome” is not enough. The explanation should cover how points are assigned and why they are assigned that way.
  • There should be meaningful information about the logic involved.
  • Interpretability should be thought through while the model is being developed.

So, when an insurance application or credit application is rejected, the insurance company or the bank should provide an explanation of why and how the decision was made.
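A minimal sketch of what such an “explainable” decision could look like in practice: a simple points-based scorecard that, instead of returning only an outcome, reports each feature’s contribution and the main reasons the score fell below the cut-off. The features, weights and cut-off here are hypothetical, chosen only to illustrate the idea of reason codes.

```python
# Hypothetical credit scorecard with per-feature point contributions.
# The decision comes with "how the points are assigned" (contributions)
# and "why" (reason codes), as the bullet points above require.

WEIGHTS = {"income_band": 3, "years_employed": 2, "missed_payments": -5}

def score_with_explanation(applicant, cutoff=10):
    # How many points each feature contributed to the total.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= cutoff else "decline"
    # Reason codes: the two features that pulled the score down the most.
    reasons = sorted(contributions, key=contributions.get)[:2]
    return decision, total, contributions, reasons

applicant = {"income_band": 2, "years_employed": 1, "missed_payments": 2}
decision, total, contributions, reasons = score_with_explanation(applicant)
print(decision, total)  # decline -2
print(reasons)          # ['missed_payments', 'years_employed']
```

The point of the sketch is the return value: the applicant is not simply told “declined”, but can see that missed payments cost 10 points and that a longer employment history would have helped, which is exactly the kind of meaningful information about the logic that Article 22-style rights contemplate.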

The white paper of the Committee of Experts on a Data Protection Framework for India, recently released for public comments, asks the question: “Should there be a prohibition on evaluative decisions taken on the basis of automated decisions?”

Our humble answer is no: the regulatory framework should mandate “explanation”, not “prohibition”.

Author: Sharda Balaji