• Online, Self-Paced
Course Description

In this course, you'll explore the concept of AI Explainability, the role of CloudOps Explainability in managing multi-cloud solutions, how to evaluate explanatory systems, and the properties used to design systems that accommodate explainability approaches. You'll look at how users interact with explainable systems and how explainability affects the robustness, security, and privacy of predictive systems. Next, you'll learn about qualitative and quantitative validation approaches and the explainability techniques used to define operational and functional derivatives of cloud operations. You'll examine how to apply explainability throughout the process of operating cloud environments and infrastructures, the methodologies involved in the three stages of AI Explainability for deriving the right CloudOps model for implementation guidance, and the role of explainability in defining AI-assisted Cloud Managed Services. Finally, you'll learn about the architectures that can be derived using Explainable Models, the role of Explainable AI reasoning paths in building trustworthy CloudOps workflows, and the need for management and governance of AI frameworks in CloudOps architectures.

Learning Objectives

  • Discover the key concepts covered in this course

Framework Connections

The materials within this course focus on the NICE Framework Task, Knowledge, and Skill statements identified within the indicated NICE Framework component(s):

Specialty Areas

  • Systems Development