This course addresses the problem of specifying, developing, and testing software systems that are built around artificial intelligence (AI) components.
Learning Objectives
- Analyze tradeoffs in designing production systems with AI components.
- Analyze qualities beyond accuracy, such as operation cost, latency, updateability, and explainability.
- Implement production-quality systems that are robust to mistakes of AI components.
- Design fault-tolerant and scalable data infrastructures for learning models, serving models, versioning, and experimentation.
- Reason about how to ensure the quality of the entire machine learning pipeline with test automation and other quality assurance techniques, including automated checks for data quality, data drift, feedback loops, and model quality (some of these topics remain open research questions; an illustrative sketch follows this list).
- Build systems that can be tested in production, and build deployment pipelines that allow careful rollouts and canary testing.
- Consider privacy, fairness, and security when building complex AI-enabled systems.
- Communicate effectively in teams with both software engineers and data analysts.
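To illustrate the pipeline-quality objective above, the following is a minimal sketch of one kind of automated data-drift check, using a two-sample Kolmogorov-Smirnov test from SciPy. The feature data, function name, and significance threshold are illustrative assumptions, not course materials.

```python
# Hypothetical sketch: flag distribution drift between training data and
# live production data for a single numeric feature.
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(train_col: np.ndarray, live_col: np.ndarray,
                 alpha: float = 0.01) -> bool:
    """Return True when a two-sample KS test rejects 'same distribution'."""
    _statistic, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
    # Simulated production data whose mean has shifted relative to training.
    production_feature = rng.normal(loc=0.5, scale=1.0, size=5_000)

    if detect_drift(training_feature, production_feature):
        print("Data drift detected: investigation or retraining may be needed.")
    else:
        print("No significant drift detected.")
```

In practice a check like this would run on a schedule against recent production inputs and feed an alerting or retraining pipeline rather than printing to the console.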
Framework Connections
The materials within this course focus on the NICE Framework Task, Knowledge, and Skill statements identified within the indicated NICE Framework component(s):
Competency Areas
Feedback
If you would like to provide feedback on this course, please e-mail the NICCS team at NICCS@mail.cisa.dhs.gov. Please keep in mind that NICCS does not own this course and does not accept payment for course entry. If you have questions about the details of this course, such as cost, prerequisites, or how to register, please contact the course training provider directly. You can find the training provider's contact information by following the link that says “Visit course page for more information...” on this page.