
An introduction to AI safety and security

Our new series spotlights how and why you should implement generative AI safely and what Zoom is doing to create a safe and secure AI environment for our customers. 


Updated on September 27, 2024

Published on September 16, 2024

Michael Adams
Chief Information Security Officer

Michael Adams brings nearly 30 years of security and leadership experience as Zoom’s Chief Information Security Officer. Michael joined Zoom in August 2020 and served as Chief Counsel to the COO and CISO while building the company’s insider risk, global intelligence, operations assurance, and security legal programs. A graduate of the U.S. Naval Academy who began his career as an engineer, Michael was previously an advisor to two Chairmen of the Joint Chiefs of Staff, numerous prominent publicly traded and privately held companies, and the highest levels of the U.S. Government. He enjoyed success as an executive at Palantir and as a partner in a major international law firm. Michael and his wife, two children, and Chesapeake Bay retriever live in Charlotte, North Carolina, where they are active members of the community and longtime, die-hard Baltimore Orioles fans.

Artificial intelligence has entered the mainstream and helped us achieve new heights in efficiency through technology. This makes the advantages of AI seem practically limitless, giving our imaginations plenty of runway to reimagine what’s possible. 

While it’s fun to dream up the next great idea, implementing a new AI solution requires a strong commitment to safety and securing the data that drives it. We’re kicking off a new series on the Zoom blog, where we’ll discuss how and why you should implement generative AI safely and what Zoom is doing to create a safe and secure AI environment for our customers.

What is generative AI and how does it work?

AI can serve many different purposes, and generative AI in particular gives you tools to create new content — images, text, audio, video, and data — from the inputs you supply to AI models. Sometimes referred to as GenAI, generative AI works at a speed and scale no human could match, using various AI and machine learning algorithms to deliver results in moments when prompted. As a result, people can accelerate their work and save valuable time with generative AI tools for tasks such as drafting meeting summaries, sourcing images, or overcoming writer’s block with copywriting assistance.

Generative AI solutions can be invaluable to the end user, freeing up time to focus on more meaningful work. But before you choose which AI tools to implement in your workflows, it’s important to consider a few things.

  1. Determine and outline the problems you want to solve. Are you trying to improve customer service? Simplify repetitive tasks? Assess new trends? Translate and localize content quickly? 
  2. Develop a timeline. How quickly are you able to explore, evaluate, and implement your AI options? Which teams in your organization need to sign off? What’s the cost of moving too slowly?
  3. Decide what matters to you. Different AI tools come with different benefits, drawbacks, and questions to ask the vendors you’re evaluating. What features are included in their services, and how do they align with your goals?

In addition to these questions, it’s important to research how a vendor handles AI safety and security, and what privacy measures it applies when implementing and using generative AI. We also recommend that organizations and their end users explore how data is collected and used to power the AI tools they want to implement.

What is AI safety versus AI security?

To begin with, it’s important to know how AI safety compares to AI security. The two are fundamental yet distinct aspects of deploying and protecting AI systems. Specifically:

  • AI security is focused on safeguarding the confidentiality, integrity, and availability of data used in AI models and systems.
  • AI safety involves broader considerations related to robustness and reliability, ethical implications, long-term societal, economic, and environmental impacts, and impacts on human rights and values.

AI security at Zoom

To overcome some of the security challenges that surface with AI integrations — namely, the need to safeguard AI models, datasets, and training environments — a number of guidelines, standards, and frameworks are emerging from respected institutions such as the National Institute of Standards and Technology (NIST), the National Cyber Security Centre (NCSC), and, jointly, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Our approach to AI security aligns with these industry standards and leading practices, and is designed to preserve the trust and confidence of our users by focusing on mitigating emerging threats. 

Our commitment to AI security is also integrated throughout the entire Zoom Secure Development Lifecycle (ZSDLC), encompassing secure supply chain management, model training, secure design, secure development, secure operation, and employee training. We’re incorporating AI considerations into our Governance, Risk, and Compliance (GRC) policies and risk framework, and our Security Assurance team conducts security testing and research.

How Zoom approaches AI safety

Our approach to AI safety starts with the models and data we use to build our services. For Zoom-hosted models, we validate and manage our training data, and when selecting third-party vendors, we evaluate their safety procedures to ensure they align with our mission. Our evaluations include testing the models against standard safety metrics to check for common issues that can arise during model training. 

Account owners and admins have controls to manage the availability of AI features for their accounts, including user- and group-level controls that provide options for deployment. These options include, when appropriate, allowing for human review of AI outputs before they’re shared more broadly. Additionally, when in-meeting AI features are used within Zoom Workplace (our open collaboration platform with AI Companion), the sparkle icon notifies you that AI is enabled and in use, providing transparency for customers and participants. 

Here are three different ways we approach AI security and safety at Zoom:

  1. Protection of Zoom's AI products and services that utilize generative AI – such as Zoom AI Companion and Zoom Contact Center Expert Assist – and the underlying models that support them.
  2. Leveraging AI throughout our security program and practices to stay ahead of evolving threats.
  3. Securing Zoom's internal use of AI.

Managing safety and security alongside a federated AI approach

At Zoom, we take a federated approach to AI, which means we apply the best large language model for a specific task, including third-party AI models that customers are already familiar with. Customers can choose which features they use and, for select features, whether to use only Zoom-hosted models. This gives administrators more control over what’s available within their organization.
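To make the federated idea concrete, the pattern can be sketched as a simple model router: each task type maps to an ordered list of candidate models, and an admin policy can restrict selection to in-house-hosted models. This is a minimal, hypothetical illustration of the routing concept — the model names, task types, and policy flag are invented for this sketch and do not reflect Zoom's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ModelChoice:
    name: str
    hosted_in_house: bool  # admins may restrict selection to in-house models

# Hypothetical routing table: task type -> candidate models, best-suited first.
ROUTES = {
    "meeting_summary": [ModelChoice("in-house-summarizer", True),
                        ModelChoice("third-party-llm-a", False)],
    "email_draft":     [ModelChoice("third-party-llm-b", False),
                        ModelChoice("in-house-writer", True)],
}

def select_model(task: str, in_house_only: bool) -> ModelChoice:
    """Return the first eligible model for a task, honoring an admin
    policy that limits selection to in-house-hosted models."""
    for choice in ROUTES.get(task, []):
        if not in_house_only or choice.hosted_in_house:
            return choice
    raise LookupError(f"no eligible model for task {task!r}")

# With the in-house-only policy enabled, the third-party model is skipped:
print(select_model("email_draft", in_house_only=True).name)   # in-house-writer
print(select_model("email_draft", in_house_only=False).name)  # third-party-llm-b
```

The key design point is that the policy check lives in the router, not in each feature, so an administrator's choice applies uniformly across every task type.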

In line with our commitment to responsible AI, Zoom does not use any customer audio, video, chat, screen sharing, attachments, or other communications-like customer content (such as poll results, whiteboards, and reactions) to train Zoom’s or third-party artificial intelligence models. For more information about how Zoom AI Companion handles customer data, visit our support page.

AI Companion, safety, security, and the future

This initial discussion of AI safety and security only begins to scratch the surface; in the coming months, we’ll share more details about how we’re maximizing our efforts during the global shift to AI. We believe AI is an incredible way to improve how we work, and this is just the beginning. As we continue to release new features for AI Companion and Zoom Workplace, rest assured that AI safety and security are at the forefront of our development process. 

If you want to learn more about Zoom’s approach to privacy and security, join us for our upcoming webinar, titled Zoom’s Approach to AI Privacy and Security, on September 26, 2024. 
