AI Red-Teaming and AI Safety: Masterclass

#1 AI Safety Course. Learn AI security from the creator of HackAPrompt, the largest AI safety competition ever run (backed by OpenAI and ScaleAI).

Course Overview

Overall Proficiency Level
3 - Advanced
Training Purpose
Skill Development
Specific Audience
All
Delivery Method
Online, Instructor-Led

Learning Objectives

By the end of this course, you'll be able to identify and exploit AI vulnerabilities through hands-on red-teaming projects, gaining practical experience in uncovering and mitigating threats such as prompt injections and jailbreaks. You'll also learn to design and implement robust defense mechanisms that secure AI/ML systems throughout the development lifecycle. Finally, you'll collaborate with top industry experts and a community of peers, culminating in a final project where you apply your new skills to protect a live chatbot or your own AI application.

Framework Connections

The materials within this course focus on the NICE Framework Task, Knowledge, and Skill statements identified within the indicated NICE Framework component(s).

Feedback

If you would like to provide feedback on this course, please e-mail the NICCS team at NICCS@mail.cisa.dhs.gov. Please keep in mind that NICCS does not own this course or accept payment for course entry. If you have questions related to the details of this course, such as cost, prerequisites, how to register, etc., please contact the course training provider directly. You can find course training provider contact information by following the link that says “Visit course page for more information...” on this page.
