
Diversity, inclusion, and Responsible AI are now the bedrock of bias prevention

“Diversity expansion can then prevent bias. Only through education can organizations understand the critical need to hire a diverse talent base. It may not be possible to have what some call an ‘unbiased brain,’ but it is possible to bring diverse brains to the table to help prevent it.” —Christian Thilmany, Director, AI Strategy, Microsoft

Artificial intelligence (AI) is here, and with it comes the responsibility of understanding both its greatest opportunities and the challenges that come with its application. The most important part of applying AI is ensuring that you implement it responsibly. Responsible AI practices play a fundamental role in minimizing prediction bias that can hurt a business, its employees, and its customers.

Machine learning plays a key role in AI bias. For those new to AI, machine learning (ML) refers to systems that learn from data and automate human processes without being explicitly programmed for each task. However, AI can only know what you tell it. ML bias occurs when an algorithm’s output becomes prejudiced because of faulty assumptions embedded in the data it is trained on. The consequences range from dangerous failures in autonomous vehicles to outcomes that exclude traditionally marginalized groups. In the medical field, for example, an algorithm might recognize only men as doctors, or overlook minorities entirely. AI “machines” are byproducts of the human decision making that goes into the data and technology used to build those solutions. So, how do you prevent this kind of machine learning bias from happening? It takes extensive education on the ethical, technological, and human influences of this emerging science.
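To make the mechanism concrete, here is a minimal, hypothetical sketch of how a classifier trained on skewed samples reproduces that skew. The data and feature layout are invented for illustration, not drawn from any real system:

```python
# A toy illustration (hypothetical data) of bias learned from skewed training data.
from sklearn.linear_model import LogisticRegression

# Training data: [is_female, years_of_training]
# Every doctor (label 1) in this sample happens to be male -- a sampling bias.
X_train = [
    [0, 8], [0, 9], [0, 7],   # male doctors
    [1, 2], [1, 1], [0, 1],   # non-doctors
]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# An equally qualified female candidate is scored lower purely because
# the training sample never contained a female doctor.
print(model.predict_proba([[1, 8]]))  # female, 8 years of training
print(model.predict_proba([[0, 8]]))  # male, 8 years of training
```

The model has made no technical error; it has faithfully encoded the sampling bias it was shown.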

Understanding how these technologies leverage data is a key element of bias prevention. Recently, one of Microsoft’s cloud competitors built a recruiting tool to help automate the hiring of prospective employees by scouring the internet. As part of that effort, it trained its AI solution on the historical patterns of candidates it thought would be interested in working for the company. Unfortunately, the company had to scrap the system because it favored male applicants over female ones. AI itself wasn’t to blame; the data fed into the system was. A lack of inclusive data will usually skew results. Data should always be a balanced representation of whatever the organization is trying to model, and it must include all races and genders, especially when its use will impact human beings. Data is not the only issue, however. It’s important to be aware of the ramifications of AI technology and to be extremely thoughtful about not only the data but also the algorithms used to train models, particularly when using deep learning.
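A practical first step, before any model is trained, is simply auditing the data for representation. The sketch below assumes a hypothetical candidate dataset with `gender` and `hired` columns; the file name and schema are illustrative, not from the article:

```python
# A quick representation audit of training data (hypothetical file and columns).
import pandas as pd

df = pd.read_csv("candidates.csv")  # assumed schema: gender, hired, ...

# Raw representation: is any group severely under-sampled?
print(df["gender"].value_counts(normalize=True))

# Outcome rates per group: large gaps here will be learned by the model.
print(df.groupby("gender")["hired"].mean())
```

Checks like these are cheap, and a badly imbalanced result is a signal to fix the data before any modeling begins.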

Deep learning is one of the hottest technologies to hit the data science field, but its use can create skewed predictions if data scientists aren’t mindful. Like other ML algorithms, deep learning uses training data to systematically determine patterns and make predictions. It is a transformational technology, but it suffers from the same challenges that plague other algorithms: unless the right training data and algorithms are applied, it will almost always lead to bias.

Deep learning can be a very powerful tool, but its algorithms present another major challenge: when a deep learning model makes a prediction, it can be very difficult to determine why. How does an engineer verify that the data used by the model was correct? Information fed into a deep learning model flows through a complicated stack of layers that is very difficult for humans to trace. Because deep learning is modeled after the human brain and its neurological patterns, the process can be extremely nuanced, and bias can creep in at many of the internal layers. Just as with the brain, determining how decisions are made becomes incredibly complex. To mitigate these challenges, Responsible AI education is the first step in bias prevention.

“Explainability,” as it is referred to in the field, is essentially a North Star for anyone wishing to use deep learning. Explainability provides transparency into how decisions are made internally. By creating algorithms that expose the decision process, you can potentially identify the bias within it. Fortunately, new technologies are quickly emerging that allow us to determine what decisions were made inside the “black box.” The Explainable Boosting Machine (EBM) is one such algorithm, developed by Microsoft Research to help prevent AI bias by producing interpretable explanations. Microsoft isn’t the only company in this fight, however: other explainability tools have emerged that help identify biased data before it ever needs to be explained. Making sure data scientists understand and utilize these newer technologies is paramount to the practice of Responsible AI and preventing bias.
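As a sketch of what this looks like in practice, the EBM ships in Microsoft Research’s open-source InterpretML package as a scikit-learn-style estimator. The dataset below is a synthetic stand-in:

```python
# A minimal Explainable Boosting Machine sketch using Microsoft's
# open-source InterpretML package (pip install interpret).
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import make_classification

# Synthetic stand-in data; in practice this would be your training set.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Global explanation: per-feature contribution curves showing how each
# input drives predictions -- useful for spotting a sensitive attribute
# doing too much of the work.
show(ebm.explain_global())
```

Because the EBM is a glass-box model, the explanation is exact rather than approximated after the fact, which is what makes it suited to auditing for bias.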

Even with the best technology, AI can still be biased. Why? Humans are also a driver of bias. The data and technologies used in machine learning are still driven by humans, and when a human is biased, the machine will be biased too. That doesn’t mean the data engineers and data scientists building AI systems are ill intentioned, but they may be negligent. Bias doesn’t have to be purposeful. It can arise simply from the fact that researchers are typically male, often Caucasian, and usually from high socioeconomic backgrounds. That makes them a very homogeneous group, and that homogeneity becomes the basis for the data, the process, and the decision making. Bias can follow, and worse, the team responsible won’t even know it. How can businesses be fair or ethical when they do not even know that their decisions are biased? Fortunately, there are many opportunities to diversify the talent pool of those building these AI systems. Diversity expansion can prevent bias, so it’s critical to hire a diverse talent base. It may not be possible to have what some call an “unbiased brain,” but it is possible to bring diverse brains to the table to help prevent bias. And creating true talent diversity, with a clear understanding of which hires advance fairness and ethics, requires coming full circle back to Responsible AI.

Given all the social unrest in the world, there is a tremendous opportunity to address the need for Responsible AI systematically. By educating more than just IT teams, society can ensure Responsible AI is taught to non-technical domains and to the human beings who influence these systems. A diverse workforce will not only help drive unbiased inputs into artificial intelligence; its members can also be part of the technical decision-making process before and during execution. That gives individuals the opportunity to make unbiased decisions with explainable technology, providing pathways to transparency. Only by educating your organization about the need for Responsible AI, and contributing to its application, will we reduce, if not eliminate, AI-driven bias.

To learn more and explore what’s possible in the realm of Responsible AI, please visit the Microsoft Responsible AI Resource Center.