6 learnings from our ethics in AI meetup


Almost every single day, stories about AI dominate the news headlines, from driverless cars to workplace transformation and teams achieving more by working smarter together. But for us, it’s not just about the technology.

AI does some amazing things. However, it’s essential that we don’t fall into the trap of making things better and faster without considering the consequences of our developments. After all, if we do nothing to ensure that AI is safe, aligned to human values, and free from bias, then it has the potential to do more harm than good.

It only seemed logical for us, as a leading technology company, to hold a meetup to share some guidance around how to implement bias-free AI and why it’s so important. Pratim Das, Head of Solutions Architecture, Data, and AI at Microsoft’s Customer Success Unit, and Ben Gilburt, Digital Horizon Lead at Sopra Steria, led some fascinating discussions about the ethics of AI and technology.

I wanted to share some of the learnings I took away from the event.

1. Biased data = biased results

Pratim talked about the importance of diverse and inclusive data. His key message was that training machine learning applications on biased data will produce biased results.

He also listed the six key factors you must consider when designing AI:

- Fairness
- Inclusiveness
- Reliability & safety
- Transparency
- Privacy & security
- Accountability

To go deeper into each of these areas and get some practical advice for how to build an ethical framework in AI, I’d strongly recommend you read Pratim’s blog.
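To make the “biased data = biased results” point concrete, here’s a minimal sketch. It’s my own illustration rather than anything shown at the meetup, and every name and number in it is invented: the true outcome depends only on a score, but because positive outcomes for one group were under-recorded in the historical labels, the trained model learns to short-change that group.

```python
# Minimal sketch (not from the talk): how skewed training data skews results.
# Assumes numpy and scikit-learn are available; all values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 10_000
group = rng.integers(0, 2, size=n)   # protected attribute: 0 or 1
score = rng.normal(size=n)           # the feature that truly matters

# True outcome depends only on the score, never on the group.
y = (score + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Biased historical labels: ~40% of group 1's positives recorded as negative.
y_biased = y.copy()
flip = (group == 1) & (y == 1) & (rng.random(n) < 0.4)
y_biased[flip] = 0

# Train on the biased labels, with group included as a feature.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, y_biased)
pred = model.predict(X)

# The model reproduces the recording bias: group 1 gets fewer positives.
for g in (0, 1):
    print(f"group {g}: positive prediction rate = {pred[group == g].mean():.2f}")
```

Even though the group label carries no real signal, the model picks up the skew in the labels, which is exactly the trap Pratim warned against.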


2. Program AI to behave as we want it to, not how we tell it to


Ben continued this train of thought. He mentioned a few examples where technology failed to be neutral, such as inappropriate ‘recommended products’ surfaced by online shopping algorithms. He talked about building AI with indirect normativity and coherent extrapolated volition: approaches that ask AI to work towards outcomes we may not be able to specify directly ourselves. In simple terms, rather than building AI around our own desires, which can be driven by selfish motivations, we should program it to behave how we would want it to behave.

Ben explains it like this: “Do what we would do if we were the type of people we wanted to be; if we had grown up together, and had convergent values.”

3. The need for diversity and inclusion in technology development

An algorithm is only as good as the data it has. Dr Allison Gardner, co-founder of Women Leading in AI, took us through the history of women in programming. Women were at the forefront of programming until it became a well-paid and attractive career. The lack of diversity among the people creating our machine learning models and algorithms means that unconscious bias is present, which in turn creates biased models.

Dr Gardner talked about how a lack of diversity and inclusion at all stages of technology development results in unconscious bias. These biases then risk exacerbating societal biases and embedding inequality in our systems.

“We need to be really honest about why the lack of diversity, particularly with women, has occurred. If we don’t, we are not going to change it,” she says.

It’s hugely important to change the culture around how computer science is taught and how people are recruited into it.

“We also need to regulate the algorithm. We’re coming in with regulation, GDPR, and algorithmic impact assessments, which will ensure that we have diversity,” she says.

Dr Gardner’s session showed me that if we don’t actively think about diversity, our models have the potential to exacerbate bias in society.

4. Ethics built in by design

“We should work with people with different backgrounds and skills. This will give us a good chance of preempting any bias,” says Amy Boyd. “Do the proof of concept early, and test as widely as you can.” As a Cloud Developer Advocate in AI and Machine Learning, Amy has a lot of experience in dealing with data.

Amy talked about one of her projects where she analysed tweets to predict the winner of The X Factor each week. She tried to gauge whether tweets were positive or negative using emojis. What she found, however, was that if she didn’t keep an eye on the data and monitor it, it would often produce biased results. If you concentrate on building unbiased models, you will keep your data ethical and produce better results for all.
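As a rough illustration of that idea (this is not Amy’s actual code, and the emoji lexicon is invented for the example), you can score tweets by the emojis they contain and keep a running count of the labels, so any skew in the incoming data becomes visible early:

```python
# Illustrative sketch: emoji-based sentiment scoring with a simple monitor.
from collections import Counter

# Invented lexicon: map emojis to a sentiment score.
EMOJI_SENTIMENT = {"😍": 1, "🎉": 1, "👏": 1, "😡": -1, "👎": -1, "😭": -1}

def emoji_score(tweet: str) -> int:
    """Sum the sentiment of every known emoji in the tweet."""
    return sum(EMOJI_SENTIMENT.get(ch, 0) for ch in tweet)

def monitor(tweets):
    """Count positive/negative/neutral tweets so skew is visible early."""
    labels = Counter()
    for t in tweets:
        s = emoji_score(t)
        labels["positive" if s > 0 else "negative" if s < 0 else "neutral"] += 1
    return labels

tweets = ["Amazing performance 😍🎉", "That was awful 😡", "Interesting song choice"]
print(monitor(tweets))  # Counter({'positive': 1, 'negative': 1, 'neutral': 1})
```

If one label starts to dominate, that’s the cue to look at the data before trusting the model’s predictions.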

5. Using Shakespeare to explore bias


Richard Potter, CTO of Microsoft Services, took the session down a more theatrical route. He used Shakespeare’s plays to demonstrate the different types of bias in AI.

Sounds weird, right? And why Shakespeare? Different stories represent different types of bias, which connect to real-world examples of bias in AI and data. To really bring this to life, Richard got volunteers from the audience to act out scenes from the Bard’s most famous plays.

Twelfth Night represents pre-existing bias in gender stereotyping and a narrow world-view, which we still see today. What we learn from Twelfth Night is that if we addressed this pre-existing bias through inclusive design and impact evaluations, we’d understand the whole picture better. We’d also reach our aims more quickly and have better data as a result.

Technical bias comes from incorrect or incomplete data, or from a malfunctioning algorithm. And what better play than Hamlet to represent this madness? What Shakespeare shows us is that an unsound mind causes all sorts of chaos. Mix this with a failure to learn from mistakes and you have a perfect example of technical bias. We can address this by ensuring our AI is well-tested and transparent.

The Bard’s final play, The Tempest, shows us emergent bias. In it, characters are manipulated and drift into poor outcomes. In the same way, AI can be manipulated by the very people it’s supposed to help, like a chatbot that learns bad language from its audience. We can address this by ensuring we have ongoing measurement and operational accountability.

Richard’s presentation confirmed to me that bias is everywhere, even in Shakespeare. The end goal is ‘AI for all’ – fair and free of bias.

6. AI applied to specific industries


AI is affecting every organisation. Therefore, ethics needs to be part of the conversation in every industry.

Udai Chilamkurthi, Lead Architect at Sainsbury’s, showed some of the latest AI technology being used in the retail industry. AI can be used to give customers a great personalised, omni-channel experience. However, it’s important to ensure this is done with ongoing measurement and careful consideration. AI doesn’t have the social and emotional intelligence we have. For example, it might recommend an inappropriate product to a customer that a human would know not to suggest.
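To illustrate the kind of careful consideration this calls for, here’s a minimal sketch of a rule-based guardrail that filters a recommender’s raw output before it reaches the customer. It isn’t anything Udai showed; the flags, categories, and products are all invented for the example:

```python
# Invented category rules: which product categories to withhold from
# customers with a given sensitivity flag.
BLOCKED_FOR = {
    "recently_bereaved": {"party supplies"},
    "recovering_alcoholic": {"alcohol"},
}

def safe_recommendations(raw_recs, customer_flags):
    """Drop recommendations whose category is blocked for this customer."""
    blocked = set()
    for flag in customer_flags:
        blocked |= BLOCKED_FOR.get(flag, set())
    return [r for r in raw_recs if r["category"] not in blocked]

recs = [{"name": "Craft gin", "category": "alcohol"},
        {"name": "Herbal tea", "category": "drinks"}]
print(safe_recommendations(recs, {"recovering_alcoholic"}))
# [{'name': 'Herbal tea', 'category': 'drinks'}]
```

A filter like this is no substitute for unbiased models, but it gives a human-defined safety net while the underlying system is measured and improved.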

And last, but certainly not least, was Chiara Garattini, Senior User Researcher at Public Health England. She spoke about AI in medical engineering. It’s incredibly important that AI stays ethical in healthcare. It has great capacity to help accurately diagnose and treat patients, but it’s up to us to ensure it is bias-free and works for everyone.

It all comes back to the quality of the data

Interestingly, though none of the speakers had met before, they all came to the same conclusion: AI itself is not the problem; the data we feed it is. Data reflects the real world, and if society itself is biased, then how can we hope for truly unbiased data?

Richard sums it up pretty well:

“In the end, it’s all about us. If we can only talk about AI in a technical language, we’ll never achieve what we need to achieve in this space. We need to go beyond our usual narrative forms and find new ways of telling stories to engage everybody in the development of the technology.”

This meetup was co-hosted in London by Microsoft Data & AI (Pratim Das) and AI Ethics London (Ben Gilburt). To stay up to date on future events, please join these groups:

London Microsoft Data and AI

AI Ethics London

About the author


Holly Manley is a Cloud Solution Architect Intern in the Customer Success Unit. She is currently on a placement year, midway through her Biomedical Engineering degree at Aston University. Holly is most interested in AI and anything related to women in tech, and loves attending meetups.