6 ways leaders can build responsible AI and data systems and the tools that can help

The power of AI and data to help us solve some of the world's biggest problems is undeniable. It helps organisations deliver better customer experiences, drive innovation, and free up employees to focus on value-driven work. However, responsible AI is a critical factor for trust and innovation. According to Capgemini, nearly nine out of 10 organisations have experienced an ethical issue around AI, and we've all seen the media reports about biased algorithms in employment, criminal justice and more.

To build and maintain trust with citizens we, as a data community, have an obligation to address these ethical issues. Previously, I've talked about how to build an effective data strategy and culture. A critical aspect of both strategy and culture is to ensure the ethical and responsible use of AI and data. We need to empower organisations to use data with a sense of responsibility. The EU recently released its Artificial Intelligence Act, the first legal framework for AI. In it, the EU takes a risk-based approach to protecting citizens' rights while still fostering innovation. As we saw with GDPR, the AI Act includes fines for infringements of up to four percent of global annual turnover (or €20M, if greater). It is therefore more important than ever to focus on the responsible use of data and AI.

Build your responsible AI strategy with the right questions


Are you using AI technology to do the right things? Is it solving the right problems in the right way? AI shouldn't be implemented because it's a shiny new piece of technology. It should be used to help solve a problem. And to work properly, it needs to reflect the community you serve. To do this, you need to build your data and AI solutions on ethical principles that put people first.

At Microsoft, one of my focusses as Chief Data Officer (CDO) is to ensure our use of data and AI remains ethical and responsible. What I have found is that this is as much a culture shift as it is a technological process. When I spoke with other data leaders across the industry in a recent webinar, they agreed.

What was clear across the board is that organisations need to take a very practical approach to responsible data and AI principles. Below are six principles that organisations can use to build their own responsible AI governance.

1. Fairness

Although our society is diverse, it is unfortunately also unfair and biased. It is our role to ensure that the systems we develop and deploy reduce this unfairness. However, fairness doesn't just relate to the technical components of the system. It is also about the societal context in which the system is used.

“Ensuring the biases are taken care of is important. We think about how data is being increasingly used across platforms and avoiding any disproportional impact as a result,” says Sudip Trivedi, Head of Data and Analytics at London Borough of Camden.

How can leaders ensure fairness? We need diverse teams that question the data and models we use at every step of the journey. We need to think critically about the broader implications and unintended consequences. Checklists that continually monitor data and AI processes are a great way to stay fair, and tools can help you validate fairness regularly; see the sketch after the tool list below.

Fairness tools:

AI fairness checklist

Datasheet fairness checklist

Fairlearn open-source toolkit
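To make this concrete, here is a minimal sketch of how the Fairlearn toolkit can surface gaps between groups. The model, features and sensitive attribute below are synthetic stand-ins; in practice you would use your own trained model and a real attribute such as age band or postcode area.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Synthetic stand-ins for a real model, features, labels and sensitive attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
sensitive = rng.choice(["group_a", "group_b"], size=200)
y_true = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y_true)
y_pred = model.predict(X)

# MetricFrame disaggregates any metric by sensitive group.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(mf.by_group)      # metric values per group
print(mf.difference())  # largest between-group gap, a simple fairness flag
```

A regular job that recomputes these gaps and alerts when they drift is one practical way to turn a fairness checklist into routine monitoring.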

2. Inclusiveness


Our aim at Microsoft is to empower everyone to achieve more. We are intentionally inclusive and intentionally diverse in the paths we take. AI needs to be built with everyone in mind. Because when you design solutions that everyone can access, the data you collect will be fairer.

This is where your diverse organisation becomes a huge benefit to you. By ensuring that your data and AI teams are diverse you will be building for everyone. And don’t forget to include a diverse audience for your testing to ensure that your systems remain accessible for all.

“It takes having that diversity within your organisation or stakeholder group to spot issues,” says Nina Monckton, Head of Data Strategy, Advancing Analytics & Data Science at AXA Health.

Inclusiveness tools:

Inclusive design guidelines

Design with accessibility in mind

3. Reliability and safety

Our data and AI processes need to be consistent with our values and principles. As owners of these models, we need to check continuously that they're not causing harm to society, and if they are, we need processes in place to fix them. We must also be transparent with our users about these issues.

Building reliable and safe AI isn't limited to physical systems that affect human life, such as self-driving cars or AI in healthcare. It's also about ensuring that every model you create stays reliable and safe, no matter how big it gets or how many people work on it.

Reliability and safety tools:

Accelerate the pace of machine learning while meeting governance and control objectives with MLOps

Preserve privacy with Project Laplace
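To give a flavour of the idea behind privacy-preserving data releases, here is a minimal sketch of the Laplace mechanism, the classic noise-adding technique at the heart of differential privacy. This is illustrative only, not the API of any Microsoft tool.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one
    person changes the count by at most 1, so noise is drawn
    from Laplace(scale = 1 / epsilon). Smaller epsilon means
    more noise and stronger privacy.
    """
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: publish how many patients opted in, without exposing any single record.
print(laplace_count(true_count=1234, epsilon=0.5))
```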

4. Transparency

Transparency can help us reduce unfairness in AI systems, help developers debug those systems, and help us build trust with our customers.

Those who are creating the AI systems should be transparent about how and why they’re using AI. They should be open about the limitations of their systems. People should also be able to understand the behaviour of AI systems.
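As one concrete way to make behaviour understandable, here is a minimal sketch using permutation importance, a common model-agnostic technique for showing which inputs a model actually relies on. The model and data are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data where only features 0 and 2 actually drive the label.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much performance drops:
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Reports like this, shared alongside a plain-language description of the system's purpose and limits, make it far easier to be open with users.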

“Being transparent is critical to doing good data work. If you don’t have the transparency, it’s very difficult to know if it’s doing its job well,” says Daniel Gilbert, Director of Data at News UK.

To truly understand AI, we need to democratise it through digital skilling. This applies not just within your organisation, but within society too. We need to work together to encourage skills growth across our communities with digital skilling programmes. This will also help increase diversity in our organisations as we introduce more people to the opportunities of technology careers.

“A lot of the data we are collecting and using are from people who are digitally literate. There’s a real hard question: Is the data we’re collecting really representative of the people we’re trying to provide services for?” says Nina.

Transparency tools:

Microsoft Learn

Improve digital skills

Bridging the digital divide

5. Privacy and security


Privacy is a fundamental right, and it must be built into all our systems and products. With AI, machine learning and the reliance on data, we add new complexities to those systems. This adds new requirements to keep systems secure and to ensure data is governed and protected.

You must think about where and how the data is coming from. Is it coming from a user or a public source? How can your organisation prevent corruption and keep the data secure?
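On the data-integrity point, here is a minimal sketch of one simple safeguard: recording checksums when data is ingested and verifying them before use, so silent corruption or tampering is caught early. The file name and manifest here are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# On ingestion: record a manifest of trusted digests.
manifest = {"customers.csv": sha256_of(Path("customers.csv"))}
Path("manifest.json").write_text(json.dumps(manifest))

# Later, before use: verify the data has not been altered or corrupted.
stored = json.loads(Path("manifest.json").read_text())
assert sha256_of(Path("customers.csv")) == stored["customers.csv"]
```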

Privacy and security tools:

Learn about confidential computing 

6. Accountability

As leaders, we are accountable for how our systems impact the world. Take facial recognition: there are many good uses for it, but only if we stick to principles that guide how we develop and sell the technology, and how we advocate for its regulation.

Accountability includes internal and external factors. We need to keep key stakeholders informed across the whole cycle of AI systems. And we need to ensure we stay accountable to society.

Mahesh Bharadhwaj, Head of Europe Analytics at Funding Circle, talks about asking the right questions at the right time: “Are we using the AI to do the right things? Do we check the models are being built correctly? Are we making sure the model is being deployed in the context it was built for?”
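One lightweight way to keep those questions, and their answers, visible to stakeholders across the whole lifecycle is to publish a structured record alongside every model, in the spirit of a model card. The fields and values below are illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A simple accountability record published with each model."""
    name: str
    version: str
    intended_use: str
    limitations: str
    owner: str
    reviewed_by: list[str] = field(default_factory=list)

# Hypothetical example entry.
card = ModelCard(
    name="loan-approval-classifier",
    version="1.3.0",
    intended_use="Triage applications for human review; not for automatic rejection.",
    limitations="Trained on 2019-2021 UK data; performance unverified elsewhere.",
    owner="credit-risk-team@example.com",
    reviewed_by=["ethics-board", "legal"],
)
print(card)
```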

Accountability tools:

Explore interaction guidelines 

Responsible AI builds trust

To build trust, a balance between culture and data capabilities is key. We need to encourage people to leverage data in ethical and responsible ways. These six principles should help you build AI systems while also building a diverse and inclusive culture. By doing this, we will ensure we're serving our communities in the best way possible.

Find out more

Discover our approach to responsible and ethical AI

Build a modern data strategy

Resources to empower your development team

Register for Microsoft Build on 25-27 May 

About the author

As an advocate of data-driven decisions, Robin has spent over two decades at Microsoft ensuring organisations have the tools to leverage the zettabytes of data available today to achieve their digital transformation vision.

Microsoft has been on its own digital transformation journey for several years and data has been a central part of that journey. Robin focuses on creating a data-driven culture across the business at Microsoft. This includes ensuring that we are considering data across our internal processes, as well as how we are helping our customers and partners succeed with data.

Robin is passionate about learning, and about collaborating with customers and partners on how to truly leverage data and AI to create new solutions.

Prior to working at Microsoft, she served in the US military. She strives to bring her best to all aspects of work and personal life. Having obtained two law degrees and multiple professional certifications, all while working full time, parenting her daughters and balancing personal commitments (including training for an Ironman), she believes anything is possible.