Securing AI and ML projects: Data and cyber risk management

As Artificial Intelligence and Machine Learning continue to cement themselves as foundational resources for growth and transformation across the financial services industry, organizations must contend with the influx of data flooding into their enterprises. Every data science achievement must also account for how that data is secured and protected, making embedded data security risk management a far more pronounced need than ever before.

The tech boom of the previous decade has introduced a professional tension between the aspirations of today’s data scientists and the pragmatic structure of seasoned security professionals. How can businesses innovate securely? How can data scientists and risk management teams collaborate to deliver a stronger product? It’s a delicate balance between safety and ambition, and one that business leaders must account for every day. Fortunately, we have a few recommendations for business leaders looking to seed strong cyber risk management across their AI and ML project lifecycles.

  1. Start with culture

For decades, security teams have supported the needs of their enterprises, and during that time they’ve become quite good at understanding the necessities, processes, and challenges that IT infrastructure and line-of-business owners face. As a result, security professionals have built assessments and frameworks that support the platforms and infrastructure they’re intimately familiar with, but data science and AI introduce a brand-new landscape. Enterprise security and risk teams are entering a space where they don’t have those same historical experiences to draw from and, as a result, are initially circumspect.

The key to helping traditional security teams feel more comfortable with AI solutions lies in cultivating a culture of mutual engagement, where cybersecurity leaders and their teams collaborate directly with data scientists and privacy officers. These collective environments enable risk managers and cyber defenders to better understand a day in the life of a data scientist: what their priorities are, what challenges they experience, and how critical data is sourced. This ability to empathize with the creators and consumers of AI-enabled services equips security professionals with the context they need to understand the applications of AI and ML, while also helping them better anticipate the types of threats they may be exposed to or inheriting.

  2. Secure executive support

Collaborative cultures start at the top of any organization. AI and machine learning ambassadors should have project buy-in from the Chief Information Security Officer, Chief Risk Officer, and Data Protection Officer at the beginning of every AI or machine learning effort. By aligning these stakeholders, AI teams can better connect with security and IT to ensure compliance and security while avoiding the project delays that come from introducing security and compliance at the last minute. Instead of delivering a near-complete asset to a security team that lacks the project context and expertise to support the data scientists’ ambitions, each team can work together to mitigate vulnerabilities and manage threats.

  3. Adopt data-centric and intelligence-driven security models

Data-centric and intelligence-driven security models provide risk management and compliance across both the traditional line-of-business portfolio and advanced data science projects. Specifically, data should enrich and validate the methodologies behind operational procedures and technical controls, including:

Data control

Protect sensitive data regardless of location, device type, hosting model, or user population.

How?

  • Inventory and classify data across networks, devices, and apps
  • Encrypt data in-flight and at rest in any application, device, or network
  • Enforce access control across user populations, apps, and devices
  • Apply and enforce declarative policy dynamically via APIs (see the sketch below)
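
To make these controls concrete, here is a minimal sketch in Python that pairs a data inventory with application-level encryption at rest and a declarative access policy. It assumes the open-source `cryptography` package; the dataset names, roles, and policy table are hypothetical, and a production system would manage keys in a KMS and serve policy through an API rather than a module-level dictionary.

```python
from cryptography.fernet import Fernet

# Hypothetical inventory mapping each dataset to a sensitivity class.
DATA_INVENTORY = {
    "customer_transactions.csv": "restricted",
    "marketing_clickstream.csv": "internal",
    "published_rates.csv": "public",
}

# Hypothetical declarative policy: which roles may read each class.
# Because the policy is plain data, it can be updated and served via an API.
ACCESS_POLICY = {
    "restricted": {"risk_officer", "data_protection_officer"},
    "internal": {"data_scientist", "risk_officer"},
    "public": {"data_scientist", "risk_officer", "analyst"},
}

key = Fernet.generate_key()  # in production, issue and rotate keys via a KMS
fernet = Fernet(key)

def store_encrypted(payload: bytes) -> bytes:
    """Encrypt a dataset at rest, wherever it happens to be hosted."""
    return fernet.encrypt(payload)

def read_dataset(name: str, role: str, ciphertext: bytes) -> bytes:
    """Check the declarative policy before decrypting for a caller."""
    sensitivity = DATA_INVENTORY.get(name, "restricted")  # unknown data fails closed
    if role not in ACCESS_POLICY[sensitivity]:
        raise PermissionError(f"{role} may not read {sensitivity} data")
    return fernet.decrypt(ciphertext)

blob = store_encrypted(b"account,amount\n42,19.99\n")
print(read_dataset("customer_transactions.csv", "risk_officer", blob))
```

Because the policy lives in data rather than in scattered conditionals, it can be versioned, audited, and pushed dynamically to every enforcement point.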

Threat intelligence

Combine real-time analysis and visibility with contextual information to identify threats, address vulnerabilities, and uncover incidents in progress.

How?

  • Real-time analysis and visibility across networks, devices, apps, users, and data
  • Contextual information, such as location, time, and behavior, about all entities, transaction risk, and overall security state (see the sketch below)
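
As a rough illustration, the sketch below folds a few contextual signals into a single risk score. The `BASELINE` profile, weights, and threshold are hypothetical; a real deployment would learn baselines from telemetry and feed high scores into incident triage.

```python
from datetime import datetime, timezone

# Hypothetical behavioral baseline per user: usual countries and login hours.
BASELINE = {
    "jsmith": {"countries": {"US"}, "hours": range(8, 19)},
}

def risk_score(user: str, country: str, ts: datetime) -> int:
    """Fold contextual signals (location, time, behavior) into one score."""
    profile = BASELINE.get(user)
    if profile is None:
        return 100  # unknown entity: treat as maximum risk
    score = 0
    if country not in profile["countries"]:
        score += 60  # access from an unusual location
    if ts.hour not in profile["hours"]:
        score += 30  # access outside the user's normal hours
    return score

login = datetime(2023, 3, 4, 2, 15, tzinfo=timezone.utc)
if risk_score("jsmith", "RO", login) >= 50:
    print("flag for investigation")  # hand off to incident triage
```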

This cross-pollination further emphasizes the symbiosis between security and data science in an AI/ML-driven market. Security needs data science to test and validate its models, while data science relies on security to defend the quality and integrity of its efforts.

  4. Privacy is a priority

As the focus on privacy continues to grow, understanding the value and the history of owned or stored data is as important as the data itself. Assess the regulatory compliance and privacy risks that arise from the collection, transfer, storage, and processing of individuals’ personal data, and use those learnings to fashion more informed privacy standards.

Under the General Data Protection Regulation (GDPR) and other global privacy regulations, both data controllers and data processors are expected to ensure that every handling of data supports the secure processing of personal data and protects the privacy of individuals. When building AI and ML projects using Azure services, review the privacy and security controls for personal data processors provided in the Azure accountability readiness checklist for the GDPR.
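
One commonly used control in this space is pseudonymization: replacing direct identifiers with keyed tokens before data reaches a training pipeline. The sketch below uses Python’s standard-library `hmac`; the `PEPPER` value is a placeholder, and pseudonymization reduces rather than eliminates privacy risk, so it complements rather than replaces a GDPR assessment.

```python
import hashlib
import hmac

# Hypothetical secret "pepper"; keep it in a secrets manager, separate from
# the pseudonymized data, so re-identification stays tightly controlled.
PEPPER = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, repeatable token."""
    return hmac.new(PEPPER, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "balance": 1042.17}
training_row = {**record, "email": pseudonymize(record["email"])}
print(training_row)  # the model sees a token, never the raw address
```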

AI and machine learning will shape the future as we know it, but only if we can respect and protect our users. Managing risk, ensuring privacy, and exploring new innovations all come down to an alignment of culture. Organizations and leaders that best integrate their data science and security teams, validate their efforts with data, and respect the privacy of their users have a clear path to success as we continue into this new era of transformation. If you’d like to learn more about new threat modeling, detection, mitigation, and triage guidance, visit Threat Modeling AI/ML Systems and Dependencies, or, if you’re interested in Microsoft’s own engineering practices and culture, explore AI/ML Pivots to the Security Development Lifecycle Bug Bar.