• Classroom
• Online, Instructor-Led
• Online, Self-Paced
Course Description

This course addresses the problem of specifying, developing, and testing software systems that are built on artificial intelligence (AI) components.

Learning Objectives

• Analyze tradeoffs for designing production systems with AI components
• Analyze qualities beyond accuracy, such as operation cost, latency, updateability, and explainability
• Implement production-quality systems that are robust to mistakes of AI components
• Design fault-tolerant and scalable data infrastructures for learning models, serving models, versioning, and experimentation
• Reason about how to ensure quality of the entire machine learning pipeline with test automation and other quality assurance techniques, including automated checks for data quality, data drift, feedback loops, and model quality (some of these topics are still open research questions)
• Build systems that can be tested in production, and build deployment pipelines that allow careful rollouts and canary testing
• Consider privacy, fairness, and security when building complex AI-enabled systems
• Communicate effectively in teams with both software engineers and data analysts
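As a small illustration of the "automated checks for data quality" objective above, the sketch below validates incoming rows against a declared schema. It is a minimal, hypothetical example, not course material: the function name, schema shape, and value ranges are all assumptions made for this sketch.

```python
def check_data_quality(rows, schema):
    """Validate each row against a {column: (type, min, max)} schema.

    Returns a list of (row_index, column, reason) violations, so a
    pipeline can fail fast or alert before training or serving a model.
    """
    violations = []
    for i, row in enumerate(rows):
        for col, (typ, lo, hi) in schema.items():
            value = row.get(col)
            if value is None:
                violations.append((i, col, "missing"))
            elif not isinstance(value, typ):
                violations.append((i, col, "wrong type"))
            elif not (lo <= value <= hi):
                violations.append((i, col, "out of range"))
    return violations


# Illustrative schema and data (assumed for this sketch).
schema = {"age": (int, 0, 120), "income": (float, 0.0, 1e7)}
rows = [
    {"age": 34, "income": 52000.0},   # clean row
    {"age": 200, "income": 52000.0},  # age out of range
    {"age": 40},                      # income missing
]
print(check_data_quality(rows, schema))
```

In a real pipeline this kind of check would typically run as an automated gate before model training, with violations logged or blocking the run; dedicated tools exist for this, but the gating idea is the same.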

Framework Connections

The materials within this course focus on the NICE Framework Task, Knowledge, and Skill statements identified within the indicated NICE Framework component(s):