• Online, Self-Paced
Course Description

In this 10-video course, learners can explore the classification of activation functions in machine learning, the limitations of the Tanh and Sigmoid functions, and how these limitations can be resolved by using the rectified linear unit, or ReLU, along with the significant benefits ReLU affords. You will observe how to implement the ReLU activation function in convolutional networks using Python. Next, discover the core tasks involved in implementing computer vision and in developing CNN models from scratch for object image classification using Python and Keras. Examine the concept of the fully-connected layer and its role in convolutional networks, as well as the CNN training process workflow and the essential elements you need to specify during CNN training. The final tutorial in this course lists and compares the various convolutional neural network architectures. In the concluding exercise, you will recall the benefits of applying ReLU in CNNs, list the prominent CNN architectures, and implement the ReLU function in convolutional networks using Python.
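As a preview of the kind of implementation covered in the course, the sketch below shows the ReLU activation itself, assuming NumPy is available; the function name and sample values are illustrative, not taken from the course materials.

```python
import numpy as np

def relu(x):
    """Element-wise rectified linear unit: ReLU(x) = max(0, x).

    Negative inputs become 0 while positive inputs pass through unchanged,
    which avoids the saturating gradients that limit Sigmoid and Tanh.
    """
    return np.maximum(0, x)

# Hypothetical feature-map values from a convolutional layer.
features = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(features))
```

In Keras, the same activation is typically applied through a layer's `activation` argument, for example `Conv2D(32, (3, 3), activation="relu")`.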

Learning Objectives

  • Discover the key concepts covered in this course

Framework Connections

The materials within this course focus on the NICE Framework Task, Knowledge, and Skill statements identified within the indicated NICE Framework component(s):

Specialty Areas

  • Systems Architecture