Loss functions
The loss functions written below are provided by default by Dannjs; see how to add more.
These functions are represented below with $\hat{y}$ being the Dannjs model predictions and $y$ being the target values. The value $n$ represents the length of the target & input lists.
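For context, here is a minimal sketch of how a loss function can be selected on a Dannjs model. It assumes the `dannjs` npm package and the `Dann`, `addHiddenLayer`, `makeWeights`, `setLossFunction` and `backpropagate` methods described in the Dannjs documentation; treat the exact names as assumptions if your version differs.

```js
// Assumes the dannjs npm package; names follow the dannjs documentation
// and may differ between versions.
const Dann = require('dannjs').dann;

const nn = new Dann(4, 2);        // 4 inputs, 2 outputs
nn.addHiddenLayer(8, 'sigmoid');  // one hidden layer of 8 neurons
nn.makeWeights();                 // initialize the weights

nn.setLossFunction('mae');        // use Mean Absolute Error instead of the default 'mse'

// Train on one example; the loss function chosen above is used here.
nn.backpropagate([0, 1, 0, 1], [1, 0]);
```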
Binary Cross Entropy Loss, this function is common in machine learning, especially for classification tasks.
Definition:
$$\text{BCE} = -\frac{1}{n}\sum_{i=1}^{n}\left[\,y_i\log(\hat{y}_i) + (1-y_i)\log(1-\hat{y}_i)\,\right]$$
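As an illustration of this definition, here is a plain JavaScript sketch (not the Dannjs internals; the function name is only illustrative):

```js
// Binary Cross Entropy: yhat are predictions in (0, 1), y are 0/1 targets.
function bce(yhat, y) {
  let sum = 0;
  for (let i = 0; i < yhat.length; i++) {
    sum += y[i] * Math.log(yhat[i]) + (1 - y[i]) * Math.log(1 - yhat[i]);
  }
  return -sum / yhat.length;
}

console.log(bce([0.9, 0.2], [1, 0])); // small loss for good predictions
```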
Mean Squared Error, this is one of the most commonly used loss functions in deep learning. This function determines a loss value by averaging the square of the difference between the predicted and desired output. It is also the default loss function for a Dannjs model.
Definition:
$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2$$
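A plain JavaScript sketch of this formula (illustrative only, not the library's internal code):

```js
// Mean Squared Error: average of the squared differences.
function mse(yhat, y) {
  let sum = 0;
  for (let i = 0; i < yhat.length; i++) {
    sum += Math.pow(yhat[i] - y[i], 2);
  }
  return sum / yhat.length;
}

console.log(mse([0.5, 1], [0, 1])); // 0.125
```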
Mean Cubed Error, this is an experimental function. The aim is to give the loss value more gradient near 0 values. Since cubing a number can output a negative value, the absolute value $|x|$ is used in the definition.
Definition:
$$\text{MCE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|^3$$
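An illustrative JavaScript sketch of this definition (the exact order of absolute value and cube may differ from the library, but the result is mathematically the same):

```js
// Mean Cubed Error (experimental): absolute value keeps each cubed term non-negative.
function mce(yhat, y) {
  let sum = 0;
  for (let i = 0; i < yhat.length; i++) {
    sum += Math.pow(Math.abs(yhat[i] - y[i]), 3);
  }
  return sum / yhat.length;
}
```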
Root Mean Squared Error, this function is the square root of an MSE output.
Definition:
$$\text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}$$
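A plain JavaScript sketch of this formula (illustrative only):

```js
// Root Mean Squared Error: square root of the MSE.
function rmse(yhat, y) {
  let sum = 0;
  for (let i = 0; i < yhat.length; i++) {
    sum += Math.pow(yhat[i] - y[i], 2);
  }
  return Math.sqrt(sum / yhat.length);
}
```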
Mean Absolute Error, this function determines the loss value by averaging the absolute difference between predicted and desired output.
Definition:
$$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|$$
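A plain JavaScript sketch of this formula (illustrative only):

```js
// Mean Absolute Error: average of the absolute differences.
function mae(yhat, y) {
  let sum = 0;
  for (let i = 0; i < yhat.length; i++) {
    sum += Math.abs(yhat[i] - y[i]);
  }
  return sum / yhat.length;
}
```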
Mean Bias Error, this function determines a loss value by averaging the raw difference between the predicted and desired output. The output of this function can be negative, which makes this function less preferable than others.
Definition:
$$\text{MBE} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)$$
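A plain JavaScript sketch of this formula; the usage line shows how positive and negative errors cancel out, which is why this loss is less preferable:

```js
// Mean Bias Error: raw differences, so positive and negative errors cancel out.
function mbe(yhat, y) {
  let sum = 0;
  for (let i = 0; i < yhat.length; i++) {
    sum += yhat[i] - y[i];
  }
  return sum / yhat.length;
}

console.log(mbe([1, 0], [0, 1])); // 0, even though both predictions are wrong
```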
Log Cosh Loss, this function determines a loss value by averaging the logarithm of the hyperbolic cosine of the difference between the predicted and desired output.
Definition:
$$\text{LCL} = \frac{1}{n}\sum_{i=1}^{n}\log\left(\cosh\left(\hat{y}_i - y_i\right)\right)$$
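A plain JavaScript sketch of this formula (illustrative only, using the natural logarithm):

```js
// Log Cosh Loss: average of log(cosh(difference)).
function lcl(yhat, y) {
  let sum = 0;
  for (let i = 0; i < yhat.length; i++) {
    sum += Math.log(Math.cosh(yhat[i] - y[i]));
  }
  return sum / yhat.length;
}
```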