The "AI Trust Calibration Workshop" explores how humans develop, calibrate, and maintain appropriate levels of trust in AI systems. Participants will learn strategies to balance under-trust and over-reliance, which are critical in high-stakes environments like defense and cybersecurity. By mastering trust calibration, organizations can better manage AI decision-making under uncertainty, reducing risks of blind compliance or operator disengagement. This training equips cybersecurity teams to better evaluate AI-generated alerts, ensuring optimal human-machine collaboration and reducing false positives and missed threats in security operations.
The materials in this course focus on the NICE Framework Task, Knowledge, and Skill statements identified within the indicated NICE Framework component(s):
If you would like to provide feedback on this course, please e-mail the NICCS team at NICCS@mail.cisa.dhs.gov. Please keep in mind that NICCS does not own this course or accept payment for course entry. If you have questions related to the details of this course, such as cost, prerequisites, how to register, etc., please contact the course training provider directly. You can find course training provider contact information by following the link that says “Visit course page for more information...” on this page.