Huber loss / np.where
Huber loss:
- Usually the threshold δ is set to 1 (as assumed in the code below).
- Huber loss is less sensitive to outliers (those with |y − ŷ| > δ) than the MSE.
- It converges faster than the mean absolute error: in the quadratic region the gradient shrinks as the error approaches zero, whereas the MAE's constant gradient can make training oscillate around the minimum.
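For reference, the standard piecewise definition, with threshold δ and error e = y − ŷ, is:

L_\delta(e) = \begin{cases} \frac{1}{2}e^2 & \text{if } |e| \le \delta \\ \delta\left(|e| - \frac{\delta}{2}\right) & \text{otherwise} \end{cases}

With δ = 1 this matches the code below: e²/2 in the small-error branch, |e| − 0.5 otherwise.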
A Python implementation:
import tensorflow as tf

def huber_fn(y_true, y_pred):
    # Assumes delta == 1
    error = y_true - y_pred
    is_small_error = tf.abs(error) < 1
    squared_loss = tf.square(error) / 2   # quadratic part, used where |error| < 1
    linear_loss = tf.abs(error) - 0.5     # linear part, used where |error| >= 1
    return tf.where(is_small_error, squared_loss, linear_loss)
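A sketch of plugging the custom loss into a Keras model (the layer sizes, input shape, and optimizer here are arbitrary placeholders, not part of the note above):

import tensorflow as tf

# Hypothetical small regression model; any Keras model is compiled the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(30, activation="relu", input_shape=[8]),
    tf.keras.layers.Dense(1),
])
model.compile(loss=huber_fn, optimizer="nadam")
# model.fit(X_train, y_train, epochs=10)  # then train as usual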
If all the arrays are 1-D, np.where(condition, X, Y) is equivalent to:

for c, x, y in zip(condition, X, Y):
    yield x if c else y
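A quick check of that equivalence (the array values are arbitrary examples):

import numpy as np

condition = np.array([True, False, True])
X = np.array([1, 2, 3])
Y = np.array([10, 20, 30])

print(np.where(condition, X, Y))                              # [ 1 20  3]
print([x if c else y for c, x, y in zip(condition, X, Y)])    # [1, 20, 3]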