Instead of amplifying human biases, can algorithms help fix them?
AMSTERDAM
Nowadays computer algorithms help decide everything from who gets a loan to what a worker is paid. And experts say those algorithms can be biased. Artificial intelligence, or AI, can amplify patterns such as gender discrimination that persist in human society, because the machines learn from human-derived data. Francesca Rossi, IBM's global leader on AI ethics, put it bluntly at a recent conference in Amsterdam. “Without oversight and unless more women [and minorities] are involved in creating algorithms, this narrative of male domination will continue,” she said. Companies including IBM see a business opportunity in teaching algorithms to break free of bias. One New York company helps employers reset salaries based on data that are blind to workers’ age, race, or gender. Other firms use data to boost fairness in hiring. Still others aim to create report cards on AI tools. It’s all still a work in progress. But Jim Stolze, an expert on data ethics in Amsterdam, says attitudes are shifting: “Now we expect our ‘nerds’ to also understand the societal implications of their work.”
Why We Wrote This
The rising use of artificial intelligence has become one of the dominant trends in business. On the heels of that trend come questions about fairness and a quest for ethical algorithms.
Alexa. Siri. The voice that responds when you say, “OK Google.” These virtual assistants rely on artificial intelligence. They are increasingly ubiquitous, and they are female.
So far no Martin or Harry or Alexander. Ever wonder why?
Kate Devlin, a technology expert and senior lecturer at King’s College London, says it may stem from biases that can lurk deep in human thought, perhaps even unnoticed. She recounts how, when she asked a developer of one of the digital assistants why he chose a female voice, his answer was, “I didn’t really think about it.”
In sharing that anecdote at the recent World Summit AI in Amsterdam, Dr. Devlin wasn’t alone in focusing on the link between gender bias and the fast-growing realm of artificial intelligence. In fact, one of the hottest questions surrounding the technology is how to grapple with the tendency of AI to reinforce human prejudice, or possibly even expand the problem in new ways.
In September, for example, the American Civil Liberties Union joined with other groups in a lawsuit alleging that Facebook “delivers job ads selectively based on age and sex categories that employers expressly choose, and that Facebook earns revenue from placing job ads that exclude women and older workers from receiving the ads.”
Whatever the outcome of that lawsuit, many experts agree the challenge called algorithmic bias is real. It’s something that companies and academic researchers are now actively working to address. Lessening the skew of algorithms is becoming a business in and of itself. But it’s no easy task, due in part to the still-nascent understanding of how AI actually works, the lack of a regulatory framework, and the fact that biases are so deeply entrenched.
“AI creators and developers are all men. And in the West, they have been white men,” Francesca Rossi, IBM’s global leader of AI ethics, said in a talk at the conference, offering her own answer on why the digital assistants have female voices. “The stories used to program AI are created by men. It’s a vicious cycle between AI development and deployment.”
To be fair, companies such as Alphabet-owned Google tout diversity on the teams behind their automated voices.
Yet the basic challenge remains. AI algorithms are already being used in deciding who gets a mortgage or a credit card, who gets a job, who universities admit, and who gets parole – not to mention influencing how you shop.
‘Garbage in, garbage out’
IBM is one of the companies trying to become a leader in building a new ethic into algorithms.
“Most current AI systems are not free of bias,” the firm says in a short online video on the issue. “But within five years, the most successful AI will be, and only those will survive.”
The video is accompanied by upbeat music. But breaking that vicious cycle – in which AI can both feed off and influence flawed human behavior – won’t be easy.
Gender bias, for example, goes back thousands of years. It was there when Plato and Aristotle espoused concepts of intellectual meritocracy.
“The wise, intelligent men ruled with a kind of legitimacy,” says Dr. Rossi, who is also a professor of computer science at the University of Padua. “The less-intelligent [considered to include all women] were seen as being somehow less human, and this gave men the right to rule and justified actions such as slavery,” she adds. “Without oversight and unless more women [and minorities] are involved in creating algorithms, this narrative of male domination will continue.”
An old rule of creating computer software is: “Garbage in, garbage out.” For companies seeking to use computers to fight bias, the key is giving machines better instructions.
The firm CompIQ in New York, for example, started in 2016 as a software platform for benchmarking the fairness of pay and benefits for companies. The goal is to help an employer set a wage scale that’s not only internally fair but also in line with other companies in the same sector.
Data on such things as location, seniority, and performance reviews are run through various AI tools to produce a market pay range for each employee. Variables such as gender, race, and age are not entered into the equation, so the machine doesn’t “think” about them in making its decisions.
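CompIQ’s actual models are proprietary, so what follows is only a minimal sketch of the general technique the article describes, sometimes called “fairness through unawareness”: train a pay model on job-related factors while withholding protected attributes. The file name, column names, and choice of model are all assumptions made for illustration.

```python
# Toy sketch of pay benchmarking that is "blind" to protected traits.
# Column names, file name, and model choice are illustrative only.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

employees = pd.read_csv("employees.csv")  # hypothetical HR export

JOB_FACTORS = ["location", "seniority_years", "performance_score"]
PROTECTED = ["gender", "race", "age"]  # deliberately never used

X = pd.get_dummies(employees[JOB_FACTORS])  # protected columns never enter
y = employees["salary"]

model = GradientBoostingRegressor().fit(X, y)

# Predicted "market" pay for each employee, blind to protected traits;
# large gaps between actual and predicted pay flag cases for review.
employees["benchmark_pay"] = model.predict(X)
employees["gap"] = employees["salary"] - employees["benchmark_pay"]
print(employees.sort_values("gap").head())
```

One caveat: simply omitting protected columns does not guarantee fairness, because correlated proxies (a neighborhood, say) can reintroduce them, which is one reason critics call for audits of such tools.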
“Titles alone are a very imperfect proxy for responsibilities. Responsibilities are the gold standard for compensation,” says Adam Zoia, the founder and CEO of CompIQ. And there’s a side benefit: “Looking at responsibilities often identifies issues employers never knew existed regarding compensation discrepancies and who is really doing what as part of his or her job function.”
Fair gatekeepers?
In addition to salaries, the business world is already using AI (fallible as it may be) to combat bias in hiring and promotions – particularly in the tech and finance sectors, where lawsuits under Title VII of the Civil Rights Act have famously occurred.
One basic example: Companies including Johnson & Johnson say they’ve boosted the diversity of job applicants by using Textio, a data-driven tool, to word their job postings in more gender-neutral ways.
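Textio’s models are likewise proprietary, but the basic idea can be shown with a toy word-list scan. The two word sets below are tiny invented samples, not Textio’s actual data.

```python
# Toy illustration of screening a job ad for gender-coded language.
# The word lists are small invented samples for demonstration.
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "fearless"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def flag_coded_words(posting: str) -> dict:
    words = {w.strip(".,!?").lower() for w in posting.split()}
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

ad = "We need a competitive rockstar engineer to join our collaborative team."
print(flag_coded_words(ad))
# {'masculine': ['competitive', 'rockstar'], 'feminine': ['collaborative']}
```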
Other firms such as Plum, Pymetrics, and Hirevue aim to help employers make data-based and bias-free choices as they sift through job applications. For instance, job applicants play some simple online games, which test for traits and aptitudes – again shifting the focus away from things like gender. The AI software then looks at whether those traits match those of successful people in jobs like the one that’s being filled.
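Here is a hedged sketch of that matching step, assuming the games reduce each applicant, and each benchmark group of successful employees, to a numeric trait vector. The trait axes and numbers are invented for illustration.

```python
# Sketch of trait-profile matching via cosine similarity. The trait
# axes and values are invented; real assessment vendors' features
# and scoring are proprietary.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical trait axes: attention, risk tolerance, planning, memory.
successful_profile = np.array([0.8, 0.4, 0.7, 0.6])  # avg of top performers
applicant = np.array([0.75, 0.5, 0.65, 0.7])         # from the games

score = cosine(applicant, successful_profile)
print(f"fit score: {score:.2f}")  # higher = closer to the benchmark profile
```

Note that the benchmark itself is built from past hires, which is exactly why critics worry the approach can codify old patterns, as the next paragraphs describe.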
Boosters say such tools promise a world where people are better matched with jobs, and with less chance that good candidates never rise to the attention of humans, who still do the final hiring.
Critics say such efforts themselves warrant careful monitoring, including perhaps third-party audits to see if they are as successful in eliminating bias as they claim. Some also worry that the approaches might codify new kinds of bias, based on profiles of who will be a “good fit” at a job.
In fact, researchers have so far identified and classified more than 180 human biases.
Paul Ohm, a law professor at Georgetown University, is looking into this issue with a research grant called “Playing with the Data” from Paris-headquartered AXA Insurance. His work focuses on screening the output of AI algorithms to identify, and then rectify, discrimination and bias. But bias and its sources are not always obvious; to dig deeper, algorithms need a built-in accountability function.
“If a computer program determines that I do not qualify for credit, at the very least it should be able to identify the critical factors that led to this decision,” he explains. Mr. Ohm believes such advances in transparency are coming through current research. “For example, a system might be able to say, ‘You would have qualified for this loan if your income was 10 percent higher and you closed one line of credit.’ ”
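What Mr. Ohm describes is what researchers call a counterfactual explanation. A minimal sketch of the idea, with an invented approval rule standing in for a real credit model:

```python
# Toy counterfactual explanation for a credit decision. The approval
# rule and thresholds are invented for illustration.
def approves(income: float, open_credit_lines: int) -> bool:
    return income >= 50_000 and open_credit_lines <= 3

def explain(income: float, open_credit_lines: int) -> str:
    if approves(income, open_credit_lines):
        return "approved"
    # Search nearby hypothetical applicants for the smallest change
    # that flips the decision.
    for pct in range(0, 51, 5):
        for closed in range(0, open_credit_lines + 1):
            if approves(income * (1 + pct / 100), open_credit_lines - closed):
                return (f"You would qualify if your income were {pct}% higher "
                        f"and you closed {closed} line(s) of credit.")
    return "no nearby counterfactual found"

print(explain(income=46_000, open_credit_lines=4))
# -> You would qualify if your income were 10% higher and you closed 1 line(s) of credit.
```

The loop tries the smallest income increase first, so the first counterfactual found stays close to the applicant’s actual situation, which is what makes such an explanation actionable.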
Meanwhile, Rossi and fellow IBM researcher Biplav Srivastava have outlined ways to evaluate deployed AI systems even if the data that “trained” the machine are not available.
The research proposes an independent, three-pronged rating system to determine the relative fairness of an AI system: “1) It’s not biased, 2) It inherits the bias properties of its data or training, or 3) It has the potential to introduce bias whether the data is fair or not.” These criteria are designed to assist the AI end-user in determining the trustworthiness of each system.
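The researchers outline a rating scheme rather than code, and the training data may be unavailable, so one plausible way for an independent rater to probe a deployed system is paired testing: send matched inputs that differ only in a protected attribute and measure how often the outcome shifts. Everything below, including the `score_applicant` stand-in, is a hypothetical illustration, not IBM’s method.

```python
# Sketch of black-box paired testing: no access to training data,
# only to the deployed model's scores. All names are hypothetical.
import random

def paired_test(score_model, base_profiles, attr, values, tol=0.05):
    """Flip one protected attribute across otherwise identical profiles
    and count how often the score moves by more than `tol`."""
    flips = 0
    for profile in base_profiles:
        scores = [score_model(dict(profile, **{attr: v})) for v in values]
        if max(scores) - min(scores) > tol:
            flips += 1
    return flips / len(base_profiles)

# A deliberately biased stand-in model, purely for demonstration.
def score_applicant(p):
    bonus = 0.1 if p["gender"] == "male" else 0.0
    return 0.7 + bonus + random.uniform(0, 0.01)

profiles = [{"income": 40_000 + 1_000 * i, "gender": "female"} for i in range(100)]
rate = paired_test(score_applicant, profiles, "gender", ["female", "male"])
print(f"outcome shifted on gender alone in {rate:.0%} of matched pairs")
```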
Some experts are hopeful that, as research into AI systems discovers how human beings make decisions, we can also then identify our human biases more precisely and embrace more egalitarian values.
At a minimum, a successful future will depend upon software engineers taking responsibility for their products and keeping track of where their data come from.
“In all the years that computer science majors went to university, nobody educated them for this philosophical issue,” says Jim Stolze, founder of Aigency, an agency for artificial intelligence, who also teaches data science and entrepreneurship at Amsterdam Science Park and is a board member of “AI for Good” in the Netherlands.
“The ethics were part of the philosophy faculty, miles away from the informatics,” he says. “Only now we expect our ‘nerds’ to also understand the societal implications of their work.”