Is AI Bias a Corporate Social Responsibility Issue?
04 / 11 / 2019
By Mutale Nkonde, Executive Director of AI For the People, for Harvard Business Review

In late 2018, Amazon discontinued the use of its AI-based recruitment system after finding that it was biased against women. According to sources close to the matter, the tool gave low ratings to resumes containing the terms “woman” or “women’s” in applications for technical roles, and went as far as downgrading applicants from two all-women’s colleges, Harvard Business Review reported.

This problem is not new. In 2003, the National Bureau of Economic Research (NBER) conducted an experiment to track racial bias in hiring. Researchers sent out two sets of fictitious resumes with identical information about education and experience; one set had African-American-sounding names, and the other had Caucasian-sounding names. They found that the Caucasian “applicants” got 50% more callbacks than their African-American counterparts, which renews the question: How can we create more fair and equitable recruitment practices? Algorithmic recruitment systems were supposed to be the answer. It was argued that they remove human bias because their determinations are based on statistical predictions of which candidates are most likely to be a “good fit.”

However, this solution did not take into account how these algorithms actually work. In the Amazon case, the algorithms driving the automated recruitment tool were trained to flag strong candidates by identifying the keywords that appeared most often in the resumes of the company’s top performers. This seems logical, but it is exactly where the bias creeps in. Algorithms cannot be trained to understand social context. In the case of employment, workplace politics often play a role in performance evaluations. For example, some employees may be evaluated as top performers because they are related to a senior executive, have seniority, or belong to the same social groups as their managers. None of this is captured on the employee evaluation forms that were used to decide which resumes would train the automated recruitment tool. Computer scientists simply pull the resumes of the employees with the highest performance ratings within each role. But those resumes clearly don’t show the full picture, and they propagate the status quo and all of the inherent biases that come with it.
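To make that failure mode concrete, here is a minimal, invented sketch (not Amazon’s actual system) of a keyword-based resume classifier trained on historical “top performer” labels. The resumes, the labels, and the use of scikit-learn are all assumptions made for illustration; the point is simply that whatever bias is baked into the labels ends up in the learned weights.

```python
# Illustrative sketch only: a keyword-based classifier trained on
# historical "top performer" labels. Resumes and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: resumes of past hires, labelled 1 if the
# employee was rated a top performer. If those ratings reflected office
# politics or demographics, the labels carry that bias into the model.
resumes = [
    "captain of men's chess club, java, python",
    "java, python, distributed systems",
    "women's coding society lead, java, python",
    "python, machine learning, women's hackathon winner",
]
top_performer = [1, 1, 0, 0]  # biased labels: gendered terms correlate with 0

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, top_performer)

# Inspect the learned weights: the token "women" picks up a negative
# weight purely because of how the historical labels were assigned.
for term, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{term:>15s}  {weight:+.2f}")
```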

This is why data scientist Cathy O’Neil argues that the statistical models produced by algorithmic decision-making systems are simply opinions written into code. She argues that we should not assume training datasets are accurate or impartial, because they are encoded with the biases of their largely white, male producers. This is what legal scholar Rashida Richardson calls “dirty data.”

Why is this so dangerous? Because the decisions made using dirty data are fed back into the training datasets and are then used to evaluate new information. This could create a toxic feedback loop, in which decisions based on historical biases continue to be made in perpetuity.
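A toy simulation shows how such a loop can perpetuate itself. Everything below (the group names, the starting rates, the decision rule) is made up for illustration: the “model” simply approves new applicants at each group’s historical approval rate, and its decisions are appended back into the training data every round, so the original gap never closes.

```python
# Toy sketch of the feedback loop: a "model" whose decisions are fed back
# into its own training data each round. All data here is invented.
import random

random.seed(0)

# Historical ("dirty") data: group B was approved less often despite equal merit.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 50 + [("B", 0)] * 50

def approval_rate(data, group):
    labels = [label for g, label in data if g == group]
    return sum(labels) / len(labels)

for round_ in range(5):
    for group in ("A", "B"):
        # "Model": approve new applicants at the group's historical approval rate.
        p = approval_rate(history, group)
        decisions = [(group, 1 if random.random() < p else 0) for _ in range(100)]
        # The decisions are fed back into the training data for the next round.
        history.extend(decisions)
    print(f"round {round_}: A={approval_rate(history, 'A'):.2f}  "
          f"B={approval_rate(history, 'B'):.2f}")
```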

How businesses can reduce bias in training data

One of the groups that have been thinking about the impact of bias in training data is Chief Executives for Corporate Purpose (CECP), a coalition of 200 CEOs from the world’s leading companies. In 2018, they published AI For Good: What CSR Professionals Should Know, a report arguing that corporate social responsibility (CSR) teams should collect social impact data on their target populations to counteract the biases that may be expressed by AI systems. However, some industry leaders feel that approach does not go far enough. In an interview with CBS News, Salesforce CEO Marc Benioff advocated for a national data law that would improve the quality of training data.

This is also an approach being considered by Congress. In June, I was part of a team that introduced the Algorithmic Accountability Act to the U.S. House of Representatives, which would force companies to audit AI systems for bias before using them in their processes. This is a first step in the governance of AI systems. Currently the inputs to algorithmic decision-making systems are protected by intellectual property laws, but the bill would make this code subject to an FDA-style review. In the absence of knowing how algorithmic inputs are weighted, we can only make inferences from the outputs as to whether AI systems are expressing racial bias, and why. For example, the COMPAS algorithm, which is widely used in the U.S. criminal justice system to assess the risk of recidivism, was found to flag black defendants as high risk far more often than white defendants with comparable records.
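To illustrate what an outputs-only inference can look like, here is a minimal sketch of a bias audit over a system’s observed decisions. The records and the “high risk” flag below are invented, and a real audit of a tool like COMPAS would involve far more careful statistics, but even a simple comparison of flag rates by group can surface disparities without any access to the model’s internals.

```python
# Minimal sketch of an outputs-based audit, assuming we can observe only a
# system's decisions, not its weights. The records below are invented.
from collections import defaultdict

# Each record: (protected group, system's decision: 1 = flagged "high risk")
outputs = [
    ("black", 1), ("black", 1), ("black", 0), ("black", 1),
    ("white", 0), ("white", 1), ("white", 0), ("white", 0),
]

flagged = defaultdict(int)
total = defaultdict(int)
for group, decision in outputs:
    total[group] += 1
    flagged[group] += decision

rates = {g: flagged[g] / total[g] for g in total}
print("flag rate by group:", rates)

# Disparate-impact style check: ratio of the lower rate to the higher rate;
# values far below 1.0 suggest the system treats the groups very differently.
ratio = min(rates.values()) / max(rates.values())
print(f"rate ratio: {ratio:.2f}")
```

A mandated audit along these lines would look at real decisions at scale, but the basic move is the same: infer disparate treatment from the outputs when the inputs are hidden.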

The proposed regulations are helpful interventions, but they do not provide an immediate solution to our AI bias problem. There’s an opportunity here for businesses that want a first-mover advantage: differentiating themselves in the marketplace by using fair and accurate AI. These companies could hire critical public interest technologists (teams made up of computer scientists, sociologists, anthropologists, legal scholars, and activists) to develop strategies for creating fairer and more accurate training data. These teams would be charged with conducting research that can help advise CSR groups on how to make strategic investments in groups working to reduce the expression of racism, sexism, ableism, homophobia, and xenophobia in our society. This would reduce the extent to which these biases are encoded into the datasets used in machine learning, and would in turn produce more fair and accurate AI systems.

Reducing bias in training data will require a sustained, multi-pronged investment in the creation of a more just society. And the companies that are currently advertising these values should be doing more to stand behind them.
