Neural Networks and Deep Learning (W2)
Logistic Regression
wᵀx + b (w: weights, one per feature; x: features; b: bias, a scalar)
Sigmoid (squashes the output to between 0 and 1)
Loss function: measures the error between y and yhat for one example; the cost function is the average loss over all examples
Derivative (slope): the gradient of the cost with respect to w and b at the current point
Using the learning rate, update w and b and recompute the cost
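A minimal worked example of the forward pass and loss above, for a single training example (the values of w, x, b, y are made up for illustration):

import numpy as np

w = np.array([[1.], [2.]])     # one weight per feature
x = np.array([[0.5], [-1.0]])  # one example with 2 features
b = 1.5                        # bias (scalar)
y = 1                          # true label

z = np.dot(w.T, x) + b                 # wᵀx + b = 0.5 - 2.0 + 1.5 = 0.0
yhat = 1 / (1 + np.exp(-z))            # sigmoid(0) = 0.5
loss = -(y * np.log(yhat) + (1 - y) * np.log(1 - yhat))  # -log(0.5) ≈ 0.693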
Formula
Activation: z = wᵀx + b
yhat = sigmoid(z)
cost (average loss) = -average(y*log(yhat) + (1-y)*log(1-yhat))
sigmoid(z) = 1/(1+exp(-z))
derivatives/grads (NumPy sketch below)
db = average(yhat - y)
dw = average(X·(yhat - y)ᵀ)
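A sketch of the forward pass, cost, and gradients in NumPy; the function name propagate is my own, and shapes follow the Shapes branch below:

def propagate(w, b, X, Y):
    m = X.shape[1]                                  # number of examples
    yhat = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))  # sigmoid(wᵀX + b), shape (1, m)
    cost = -np.mean(Y * np.log(yhat) + (1 - Y) * np.log(1 - yhat))
    dw = np.dot(X, (yhat - Y).T) / m                # shape (features, 1), same as w
    db = np.mean(yhat - Y)                          # scalar
    return cost, dw, db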
Optimize (loop sketch below)
w = w - lr*dw
b = b - lr*db
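And the update rule as a loop; lr and num_iterations are assumed hyperparameters, and propagate is the sketch above:

def optimize(w, b, X, Y, num_iterations=1000, lr=0.01):
    for i in range(num_iterations):
        cost, dw, db = propagate(w, b, X, Y)
        w = w - lr * dw    # step w against its gradient
        b = b - lr * db    # step b against its gradient
    return w, b, cost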
Shapes
X (features, examples)
X = np.array([[1., -2., -1.], [3., 0.5, -3.2]])
2 features, 3 examples
b - scalar
b = 1.5
w (features, 1)
w = np.array([[1.], [2.]])
2 features
dw (features, 1) - same shape as w
dw = [[ 0.25071532] [-0.06604096]]
db - scalar
db = -0.12500404500439652
Cost - scalar
cost = 0.15900537707692405
Y (1, examples)
Y = np.array([[1, 1, 0]])
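Feeding the example X, Y, w, b above into the propagate sketch from the Formula branch should reproduce the listed values and shapes:

cost, dw, db = propagate(w, b, X, Y)
print(cost)   # 0.15900537707692405 (scalar)
print(dw)     # [[ 0.25071532] [-0.06604096]], shape (2, 1)
print(db)     # -0.12500404500439652 (scalar)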
Steps
Prepare Data
Normalize or standardize the data; image data is usually divided by 255 (sketch below)
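A minimal sketch of the image case, assuming a hypothetical toy batch of RGB images:

images = np.random.randint(0, 256, size=(3, 64, 64, 3))  # 3 made-up 64x64 RGB images
X = images.reshape(images.shape[0], -1).T / 255.         # flatten to (features, examples), scale to [0, 1]
print(X.shape)                                           # (12288, 3)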
Define the functions
Sigmoid
Activation
Loss/Cost
Derivative
Optimize
Calculate the cost and gradients
Optimize with different learning rates, iteration counts, and optimization functions
Avoid overfitting (training accuracy far outperforms test accuracy)
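Putting the steps together, a hedged end-to-end sketch; model and predict are my own names, propagate and optimize are the sketches above, and prediction thresholds yhat at 0.5:

def predict(w, b, X):
    yhat = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))
    return (yhat > 0.5).astype(int)          # threshold probabilities at 0.5

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, lr=0.005):
    w = np.zeros((X_train.shape[0], 1))      # initialize weights to zeros
    b = 0.0
    w, b, cost = optimize(w, b, X_train, Y_train, num_iterations, lr)
    train_acc = np.mean(predict(w, b, X_train) == Y_train)
    test_acc = np.mean(predict(w, b, X_test) == Y_test)
    return w, b, train_acc, test_acc         # train_acc >> test_acc signals overfitting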