How To Prevent Discriminatory Outcomes In Machine Learning

As machine learning (ML) systems continue to improve, their integration into the systems that make up society becomes more seamless. ML is already involved in critical decisions such as court rulings and job hiring.

Without a doubt, using ML in these processes will lead to greater efficiency. With good design, ML systems can also reduce the biases humans bring to their decisions.

At the other extreme, this integration could go badly wrong. Trained on data with underlying biases around race, gender, or other factors, ML can amplify those biases and further perpetuate discrimination.

How, then, do we make sure that these systems will not end up violating our rights? The World Economic Forum (WEF) addresses this question in its white paper How to Prevent Discriminatory Outcomes in Machine Learning.


The Challenges

On the nature of ML

ML is ubiquitous in our society, especially in developed regions such as the United States and Europe. However, these systems are highly complex and often proprietary. This renders them black boxes, with people largely unaware of their inner workings, and strips them of transparency and auditability. Understandably, this can sow distrust in the technology.

On the data used to train ML systems

Data isn’t always widely available. Corporations typically keep the data they collect private, so entities that hold data and have the expertise to harness it gain an advantage in developing ML systems.

As mentioned earlier, these data sets may carry underlying biases and errors that can further entrench discrimination.
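As a concrete illustration, the sketch below shows one simple way to audit a training set before any model is built: compare historical outcome rates across groups defined by a protected attribute. The column names, toy data, and 10% threshold are hypothetical and are not drawn from the WEF paper.

```python
# A minimal sketch of a pre-training data audit, assuming a pandas DataFrame
# with a hypothetical protected-attribute column "group" and a binary outcome
# column "hired". Names and thresholds are illustrative only.
import pandas as pd

def audit_label_rates(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Return the positive-outcome rate for each group in the training data."""
    return df.groupby(group_col)[label_col].mean()

# Toy example: historical hiring records that may carry past bias.
records = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0],
})

rates = audit_label_rates(records, "group", "hired")
print(rates)

# If the gap between groups is large, the historical labels themselves may
# encode discrimination, and a model trained on them can reproduce it.
if rates.max() - rates.min() > 0.10:
    print("Warning: outcome rates differ noticeably across groups; review the data.")
```

A check like this does not prove or disprove discrimination on its own, but it flags data sets that deserve closer human review before training begins.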

On the design of ML algorithms

Apart from the data used for training, the algorithms used to develop the ML system can also pose risks of discrimination. These risks can be attributed to the following (a short model-inspection sketch follows the list):

  • Wrong choice of algorithm
  • Building an algorithm with inadvertently discriminatory features
  • Absence of human oversight and involvement in the use of the ML system
  • Lack of understanding of how the algorithm works, leading to discrimination being overlooked
  • Unchecked and intentional discrimination
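One way to address the last two risks is to keep at least part of the decision logic inspectable. The sketch below is a minimal, hypothetical illustration using scikit-learn: it fits a small interpretable model and prints each feature's weight, so that a feature acting as a proxy for a protected attribute (for example, a postcode-based score) becomes visible to reviewers. The feature names and data are invented for illustration and are not from the WEF paper.

```python
# A minimal sketch of inspecting an interpretable model, assuming scikit-learn
# is available. Feature names and data are hypothetical; the point is that a
# simple, auditable model makes it easier to spot features that act as proxies
# for protected attributes.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["years_experience", "test_score", "postcode_risk_index"]

# Toy training data: six applicants, binary hiring decision.
X = np.array([
    [5, 80, 0.9],
    [3, 70, 0.8],
    [6, 85, 0.2],
    [2, 60, 0.9],
    [7, 90, 0.1],
    [4, 75, 0.3],
])
y = np.array([0, 0, 1, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# Report which features drive decisions; a large weight on a proxy feature is
# a signal that the design may be inadvertently discriminatory.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```

This kind of inspection is only feasible when the model is simple or paired with explanation tooling, which is part of why opaque, unexamined systems carry higher risk.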

The Responsibilities of Businesses

Given these challenges, what can businesses do to combat discrimination? WEF highlights four focal points:

 

Figure 1. Four central principles to combat bias in machine learning and uphold human rights and dignity. Adapted from “How to Prevent Discriminatory Outcomes in Machine Learning,” by World Economic Forum, March 2018, retrieved from https://www.weforum.org/

 

  • Active Inclusion: Business entities should actively ensure inclusivity in the development of ML applications.
  • Fairness: Fairness should be prioritized in the development of machine learning systems.
  • Right to Understanding: Businesses should be able to disclose and communicate how ML is being used to make decisions that affect individual rights.
  • Access to Remedy: Platforms to remedy discriminatory outputs of ML systems, such as checking mechanisms and reporting processes, must be put in place (a minimal checking-mechanism sketch follows this list).
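To make the last two principles concrete, the sketch below shows one possible checking mechanism, not one prescribed by WEF: it computes a deployed model's selection rates per group and flags cases where the ratio of the lowest to the highest rate falls below the commonly cited 80% heuristic, so those decisions can be routed for human review.

```python
# A minimal sketch of a post-deployment checking mechanism, assuming access to
# the model's decisions and a protected attribute for each case. The 80% rule
# threshold is a commonly cited heuristic, not a legal determination, and the
# data below is invented for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs with selected in {0, 1}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, flag in decisions:
        totals[group] += 1
        selected[group] += flag
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest selection rate to the highest across groups."""
    return min(rates.values()) / max(rates.values())

# Toy log of model decisions: (group, was the applicant approved?)
log = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

rates = selection_rates(log)
ratio = disparate_impact_ratio(rates)
print(rates, f"ratio={ratio:.2f}")
if ratio < 0.8:
    print("Possible adverse impact: route these decisions for human review.")
```

Paired with a reporting channel for affected individuals, a routine check like this gives businesses an early signal that remedy may be needed.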

Policies and regulations usually lag behind technological development. Businesses, on the other hand, are directly involved in building ML systems and are always at the front line. They therefore have to integrate these principles so that they do not contribute to a culture of discrimination.

Given their access to massive amounts of data and the tools to develop ML systems, they also bear a significant obligation to uphold human rights in the development and deployment of these systems.


