This digital whitepaper describes Zoom AI Companion’s security and privacy features as of the date of publication and not other AI products or services offered by Zoom. In our continuing commitment to empowering productivity — while keeping security and privacy at the core of our products — the features described in this paper may evolve.
Generative AI Model Security
In addition to the steps outlined in Zoom’s secure SDLC above, models hosted by Zoom are subject to security reviews to assess security threats specific to generative AI models. The generative AI model review includes commonly known LLM model vulnerabilities, in line with OWASP’s Top 10 for LLMs and other secure AI frameworks. Vulnerabilities identified in the generative AI security reviews must be remediated in accordance with Zoom’s vulnerability remediation standards.
Zoom’s third-party subprocessors are subject to security assessments on at least an annual basis as part of Zoom’s third-party risk management program. Zoom’s third-party risk management controls are assessed by independent audit firms as indicated in Zoom’s security certifications and attestations, which are available to customers on Zoom’s Trust Center.