13 March 2018

Africa: Why We Need Machines to Learn to Avoid Discrimination


United Nations — The opportunities that artificial intelligence (AI) can unlock for our world - from discovering cures for diseases that kill millions each year to significantly reducing carbon emissions - are growing at an astonishing pace.

Technology that leverages the ability of machines to learn from vast quantities of data and use those lessons to make predictions (a subset of AI technology called machine learning (ML)) is already enabling pathways to financial inclusion, citizen engagement, affordable healthcare, and many more vital systems and services.

Every day, we are uncovering new ways of using machine learning to improve people's lives, and oftentimes we can translate those discoveries into real-life impact in a matter of weeks or even days. Machine learning is one of the most powerful tools humanity has created - and it is more important than ever that we learn how to harness that power for good.

A lot of the excitement surrounding AI systems has to do with automation: what happens when robots take our jobs, or take on military roles, or drive our vehicles for us? One dimension of automation that receives less attention is the automation of decision making.

Machine learning technologies are already making life-altering decisions every day. In New York City, machine learning systems decide where garbage gets collected, how many police officers to send to which neighborhoods, and whether a teacher should keep their job.

Learning not to discriminate

As we empower machines to make critical decisions about who gets included and excluded from these types of vital opportunities, we need to be aware, cautious and deliberate to prevent discriminatory outcomes.

After all, machine learning is only a tool, and the responsibility falls on people to use this tool ethically - in other words, to design and use machine learning applications in a way that not only improves business efficiency but also promotes and protects human rights.

While using technology to automate decisions isn't a new practice, the nature of ML technology - its ubiquity, complexity, exclusivity, and opacity - can amplify long-standing problems related to unequal access to opportunities.

Not only can discriminatory outcomes in ML undermine human rights, but they can also lead to the erosion of public trust in the companies using ML technology.

These risks will not disappear on their own. We need to address them by evaluating the ways discrimination can get built into ML systems, and intervening accordingly to get these systems to 'learn' not to discriminate.

What happens when machines learn to discriminate?

Most of the stories we've heard about discrimination in machine learning come out of US and European contexts - including media coverage of a Google photo-tagging feature that mistakenly categorized an image of two black friends as gorillas, and of predictive policing tools that have been shown to amplify racial bias.

In many parts of the world, particularly in middle- and low-income countries, the implications of using ML to make decisions that fundamentally affect people's lives - without taking adequate precautions to prevent discrimination - are likely to have far-reaching, long-lasting, and potentially irreversible consequences. Already we're seeing how this can look:

- There are now ways for insurance companies to predict an individual's future health risks. At least two private multinational insurance companies operating in Mexico today are using machine learning to figure out how to maximize the efficiency and profitability of their operations. In health insurance, the obvious way to do this is to attract as many healthy (i.e., low-cost) customers as possible and deter less healthy (i.e., high-cost) ones.

We can easily imagine a scenario in which these multinational insurance companies, in Mexico and elsewhere, use ML to mine a large variety of incidentally collected data (shopping history, public records, demographic data, etc.) to recognize patterns associated with high-risk customers and charge those customers exorbitant, exclusionary premiums for health insurance. A huge segment of the population - the poorest, sickest people - would thus be unable to afford insurance and be deprived of access to health services. (A minimal sketch of how such proxy-based pricing might work follows the next example.)

- In Europe, more than a dozen banks are already using micro-targeted models that leverage machine learning to "accurately forecast who will cancel service or default on their loans, and how best to intervene." Consider an engineer who, in building an ML system to classify mortgage applicants in India, chooses to give variables related to income greater weight than variables reflecting the timeliness of past payments.

Chances are that such an ML application would systematically categorize women (especially those who are further marginalized based on their caste, religion, or educational attainment) as less worthy of a mortgage loan - even if they are shown to be better at paying back their loans on time than their male counterparts - because they historically make less money than men do.

While the algorithm might be "accurate" in determining which applicants make the most money, it overlooks crucial, context-specific criteria that would contribute to a more accurate and fairer approach to deciding who receives the vital opportunities afforded by mortgage lending.
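To make the insurance scenario concrete, here is a minimal sketch in Python of proxy-based premium pricing. Every feature name, weight, and figure is invented for illustration; this is a hypothetical mechanism, not any company's actual model.

```python
# Hypothetical sketch: premium pricing driven by a risk score learned
# from incidentally collected proxy data. All names and weights invented.

BASE_PREMIUM = 100.0

# Weights a (hypothetical) model might learn; each feature is a proxy
# that correlates with expected healthcare costs.
RISK_WEIGHTS = {
    "buys_discount_groceries": 0.8,    # proxy for low income
    "lives_in_underserved_area": 1.2,  # proxy for poor access to care
    "searched_chronic_symptoms": 1.5,  # proxy for existing illness
}

def predicted_risk(features):
    """Sum the weights of every proxy feature the applicant triggers."""
    return sum(w for name, w in RISK_WEIGHTS.items() if features.get(name))

def quoted_premium(features):
    """Scale the premium up sharply with predicted risk."""
    return BASE_PREMIUM * (1.0 + predicted_risk(features)) ** 2

# The poorest, sickest applicant triggers every proxy and is quoted more
# than twenty times the base premium - an effective denial of coverage.
applicant = {name: True for name in RISK_WEIGHTS}
print(quoted_premium(applicant))  # 2025.0
```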
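The mortgage example works the same way: the discriminatory outcome is encoded in a single design choice - the relative weight given to income versus repayment history. Again, the weights and applicant records below are invented for illustration.

```python
# Hypothetical sketch: a weighted credit score in which income dominates
# repayment history. Incomes are normalized to a 0-1 scale for clarity.

def credit_score(income, on_time_ratio, w_income=0.9, w_repayment=0.1):
    """Weighted sum of normalized income and on-time repayment ratio."""
    return w_income * (income / 100_000) + w_repayment * on_time_ratio

# Applicant A: lower income (reflecting a historical pay gap) but a
# perfect record of on-time repayment.
score_a = credit_score(income=40_000, on_time_ratio=1.0)  # 0.46

# Applicant B: higher income but a markedly worse repayment record.
score_b = credit_score(income=70_000, on_time_ratio=0.6)  # 0.69

print(score_a < score_b)  # True: B outranks A despite repaying less reliably

# Rebalancing the weights (w_income=0.3, w_repayment=0.7) reverses the
# ranking (A: 0.82, B: 0.63) - the engineer's choice decides the outcome.
```

Neither sketch ever references gender, caste, or health status directly; the exclusion emerges entirely from proxies and weights, which is precisely what makes it hard to spot.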

What companies can do

These scenarios tell us that while machine learning can do enormous good in the world, those benefits are not inevitable. We need to look closely at the ways discrimination can creep into ML systems, and the ways that companies can act proactively to secure a bright future for ML.

To that end, we've arrived at eight things all companies involved in machine learning can and should do to maximize the shared benefit of this game-changing technology while minimizing real risks to human rights:

1. Develop and enhance industry-specific standards - for fairness and non-discrimination in ML.

2. Improve company governance - through internal codes of conduct and incentive models for adherence to human rights guidelines.

3. Assess wider impacts - map out risks before releasing an AI system, throughout the lifecycle of ML products, and for each new use case of an ML application.

4. Take an inclusive approach to design - ensure diversity in ML development teams, and train ML designers and developers on human rights responsibilities.

5. Optimize ML models for fairness, accountability, transparency and editability - include fairness criteria and participate in open-source data and algorithm sharing (a minimal sketch of one such fairness check follows this list).

6. Monitor and refine algorithms - monitor ML model use across different contexts and communities, keep models contextually relevant, and organize human oversight.

7. Measure, evaluate, report - where ML interacts with the public and makes decisions that significantly affect individuals, ensure that appropriate notices are provided.

8. Provide channels to share ML impact transparently - establish open communication channels with representative groups of the people that ML applications can affect.
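As one example of the fairness criteria mentioned in point 5, a team could routinely compare approval rates across demographic groups. The Python sketch below applies the common "four-fifths" rule of thumb; the threshold, group labels, and sample data are illustrative assumptions, not a standard any particular company is known to apply.

```python
# Minimal fairness check: flag any group whose approval rate falls below
# 80% of the best-off group's rate (the "four-fifths" rule of thumb).
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Return groups whose relative approval rate breaches the threshold."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Invented sample: group A is approved 80% of the time, group B only 45%.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 45 + [("B", False)] * 55)
print(disparate_impact(sample))  # {'B': 0.5625} -> investigate before deploying
```

A failed check is a signal to investigate rather than an automatic verdict; that is why points 6 and 7 pair such metrics with ongoing human oversight and public notice.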

If we want to work together to "shape a future that works for all by putting people first, empowering them and constantly reminding ourselves that all of these new technologies are first and foremost tools made by people for people" (as WEF Executive Chairman Klaus Schwab urges), we need to design and use machine learning to maximize the shared benefit of this game-changing technology while minimizing real risks to human rights.
