
CourseWWWork

    48.2 Why Deep Learning is getting Popular Krish Naik ML
    12:45
    48.3 Perceptron Intuition Krish Naik ML
    18:20
    48.4 Advantages and Disadvantages of Perceptron Krish Naik ML
    6:50
    48.5 ANN Intuition and Learning Krish Naik ML
    21:21
    48.6 Backpropagation and Weight Update Krish Naik ML
    19:56
    48.7 Chain Rule of Derivatives Krish Naik ML
    11:03
    48.8 Vanishing Gradient Problem and Sigmoid Krish Naik ML
    21:26
    48.9 Sigmoid Activation Function Krish Naik ML
    7:59
    48.10 Sigmoid Activation Function 2.0 Krish Naik ML
    13:49
    48.11 Tanh Activation Function Krish Naik ML
    6:54
    48.12 ReLU Activation Function Krish Naik ML
    10:52
    48.13 Leaky ReLU and Parametric ReLU Krish Naik ML
    4:35
    48.14 ELU Activation Function Krish Naik ML
    4:07
    48.15 Softmax For Multiclass Classification Krish Naik ML
    11:35
    48.16 Which Activation Function To Apply When Krish Naik ML
    5:22
    48.17 Loss Function Vs Cost Function Krish Naik ML
    6:51
    48.18 Regression Cost Function Krish Naik ML
    16:35
    48.20 Which Loss Function To Use When Krish Naik ML
    3:42
    48.21 Gradient Descent Optimisers Krish Naik ML
    12:12
    48.25 Adagrad Krish Naik ML
    8:15
    48.26 RMSProp Krish Naik ML
    6:17
    48.27 Adam Optimiser Krish Naik ML
    6:36
    48.28 Exploding Gradient Problem Krish Naik ML
    10:11
    48.29 Weight Initialisation Techniques Krish Naik ML
    11:57
    48.30 Dropout Layers Krish Naik ML
    12:10