• Online, Instructor-Led
Course Description

Learn how prompt hacking exploits hidden vulnerabilities in Large Language Models (LLMs) and how to protect your AI applications from these threats. This course equips you with the knowledge to identify risks, execute ethical prompt injections, and implement robust defenses.

Learning Objectives

Explore security risks in AI applications and learn techniques for prompt hacking and defense.

Framework Connections

The materials in this course focus on the NICE Framework Task, Knowledge, and Skill statements identified within the indicated NICE Framework component(s):