3 ways organisations can use AI in a responsible way

Since I spoke at techUK’s Digital Ethics 2018 conference, the conversation on AI has continued to grow. Research we recently conducted showed that UK organisations have been increasing their adoption of AI technologies over the past year. The number of companies that state they have an AI strategy in place has more than doubled – from 11% in 2018 to 24% today – and over half of organisations report using AI to some degree, indicating that AI is becoming increasingly accessible.

Is responsibility keeping pace with accessibility?

The rise in AI technologies creates more urgency for organisations to understand the implications of AI-empowered decision making and how to ensure AI is being used responsibly. However, many UK leaders lack an understanding of how AI can be used in a fair, responsible and effective way, with almost two-thirds (63%) not knowing how AI systems reach conclusions.

As AI expands and embeds itself further into daily life, nearly every organisation will need to address the same question: how to create responsible AI systems that their staff and customers have confidence in. Framing this as “Responsible AI” rather than “Ethical AI” captures the wider set of concepts and approaches that can drive shared responsibility across people, society, industry and government.

Societal values should be baked in

The public debate on the societal impact of AI cannot be ignored by those developing or implementing AI solutions. A new approach to this conversation is required to ensure that AI technologies are aligned with societal values, and that a regulatory regime that both protects citizens and encourages innovation is in place.

We should not approach the need for policy and regulation with fear, or with the view that it will hinder technological development. In 1982 the then UK government appointed the leading ethicist Mary Warnock (later Baroness Warnock) to chair the Committee of Inquiry into Human Fertilisation and Embryology. The Committee brought together ethicists, scientists, religious and lay leaders and, crucially, the public to consider what the rules around in vitro fertilisation (IVF) should be. Its work culminated in the Human Fertilisation and Embryology Act 1990, which governs human fertility treatment and experiments using human embryos in the UK, and which has also shaped much of the global conversation and made the UK a world-leading centre for fertility research. This demonstrates how creating the right rules for a technology can maintain public confidence and support innovation.

Being responsible and getting things right as the AI appetite grows

When it comes to AI, we have seen an increase in appetite from business leaders to be at the forefront of pioneering AI technologies – from only 14% in 2018 to 28% in 2019. This underlines the urgency of getting ahead with AI to enable successful business outcomes, but it also creates an increased urgency to get things right. For all the truly remarkable progress made in AI over the past year, it is important to remember that we are still at a very early stage of understanding the magnitude of the impact this technology could have on our global society if it remains unchecked.

To ensure ongoing public trust in their brand, organisations must consider the long-term reputational and cultural benefits of moving beyond discussing high-level principles on the ethical use of AI and focusing on what those principles mean in practice when AI is implemented and deployed. Regulators have an important role to play here as well: a risk-based approach that focuses on outcomes rather than on specific technologies can support innovation – anti-discrimination regulation that is technology-agnostic, for example.

3 steps to ensure AI serves society in a responsible way

So, where do we go from here to ensure that AI serves our society in a healthy and responsible way? Organisations must think of AI technology holistically – understanding where AI sits in the value chain and creating the right structures to ensure long-term governance by:

  1. Establishing internal governance, for example through an objective review panel that is diverse and has the knowledge to understand the possible consequences of AI-infused systems. Key success factors are leadership support and the power to hold leadership accountable.
  2. Ensuring the right technical guardrails are in place, with quality assurance and governance that make AI systems traceable and auditable (see the sketch after this list). This is an important part of every organisation’s toolkit for scaling AI operationally and responsibly.
  3. Investing more in their own AI education and training so that all stakeholders – both internal and external – are informed of AI’s capabilities as well as its pitfalls.
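In practice, the traceability and auditability in step 2 can start as simply as recording every automated decision with enough metadata to reconstruct it later. The sketch below shows one way to do this in Python, assuming a scikit-learn-style model with a predict() method; the AuditedModel wrapper, the audit_log.jsonl file and the record fields are illustrative assumptions rather than a prescribed standard.

```python
import hashlib
import json
import time


class AuditedModel:
    """Illustrative wrapper: writes an append-only audit record for every
    prediction so that individual decisions can be traced and reviewed."""

    def __init__(self, model, model_version, log_path="audit_log.jsonl"):
        self.model = model                # any object with a predict() method
        self.model_version = model_version
        self.log_path = log_path

    def predict(self, features):
        # features: a flat list of numeric feature values for one case
        prediction = self.model.predict([features])[0]
        record = {
            "timestamp": time.time(),
            "model_version": self.model_version,
            # Hash the input rather than storing raw (possibly personal) data.
            "input_hash": hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()
            ).hexdigest(),
            "prediction": str(prediction),
        }
        with open(self.log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return prediction
```

An append-only record like this gives the review panel in step 1 something concrete to audit – which model version made which decision, and when – without necessarily retaining the raw personal data behind each case.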

Find out more

Download the report: Accelerating competitive advantage with AI

Ethical AI: 5 principles for every organisation to consider

About the author

Kate Rosenshine currently leads the Data and AI Cloud Solution Architecture team for Financial Services at Microsoft UK, helping organisations shape their data strategies in a scalable and responsible way. Her main focus lies at the intersection of technology and business: how data can shape organisations and AI systems. Prior to joining Microsoft, she worked at start-ups, where she focused on leveraging big data and behavioural analytics to augment decision making. Kate comes from a background in scientific research, specialising in neurobiological genetic engineering; during her research, she studied the influence of genetics on behaviour and survival mechanisms. She holds an MSc in Molecular Biology from Bar Ilan University and an MBA from Tel Aviv University.