• Online, Instructor-Led
Course Description

This course offers a comprehensive overview of security concerns specific to generative artificial intelligence (AI), covering deep learning models such as GPT (Generative Pre-trained Transformer) and DALL-E as well as their applications. Through a blend of theoretical knowledge and practical exercises, participants will learn about the vulnerabilities, ethical considerations, and mitigation strategies essential for deploying secure and responsible AI systems. The curriculum is designed to equip engineers and managers with the skills needed to identify potential security threats, implement robust security measures, and oversee the development of generative AI technologies with a strong emphasis on security and ethical integrity.

Learning Objectives

  • Understand the principles of Generative AI
  • Examine the security challenges specific to Generative AI systems
  • Learn best practices for securing Generative AI models
  • Explore ethical considerations in Generative AI security
  • Gain insights into the current landscape of Generative AI threats
  • Develop skills in identifying and mitigating security vulnerabilities in Generative AI applications
  • Apply cryptographic techniques for securing Generative AI models and data
  • Explore case studies highlighting security incidents in Generative AI
  • Develop a comprehensive understanding of the legal and regulatory aspects of Generative AI security
  • Collaborate on practical projects to implement secure Generative AI solutions

Framework Connections

The materials in this course focus on the NICE Framework Task, Knowledge, and Skill statements identified within the indicated NICE Framework component(s):