
I am using a custom loss function in addition to the mean squared error loss in my Keras model. The code for the custom loss function is given below:

import numpy as np
import tensorflow as tf
from keras import backend as K   # or tensorflow.keras, depending on the setup

def grad1(matrix):
    # Second derivative along axis 0 via NumPy's finite-difference gradient.
    dx = 1.0
    u_x = np.gradient(matrix, dx, axis=0)
    u_xx = np.gradient(u_x, dx, axis=0)
    return u_xx

def artificial_diffusion(y_true, y_pred):
    # np.gradient runs inside tf.py_func, i.e. outside the TensorFlow graph.
    u_xxt = tf.py_func(grad1, [y_true], tf.float32)
    u_xxp = tf.py_func(grad1, [y_pred], tf.float32)

    lap_mse = tf.losses.mean_squared_error(u_xxt, u_xxp) + K.epsilon()
    return lap_mse

I have a 1D CNN model with two outputs:

    input_img = Input(shape=(n_states,n_features))

    x = Conv1D(32, kernel_size=5, activation='relu', padding='same')(input_img)
    x = Conv1D(32, kernel_size=5, activation='relu', padding='same')(x)
    x = Conv1D(32, kernel_size=5, activation='relu', padding='same')(x)
    decoded1 = Conv1D(n_outputs, kernel_size=3, activation='linear', padding='same', 
                     name='regression')(x)
    decoded2 = Conv1D(n_outputs, kernel_size=3, activation='linear', padding='same', 
                     name='diffusion')(x)

    model = Model(inputs=input_img, outputs=[decoded1,decoded2])
    model.compile(loss=['mse', artificial_diffusion],
                  loss_weights=[1, 1],
                  optimizer='adam',
                  metrics=[coeff_determination])

When I compile and run the model, I get the error: "An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval." If I create the model as `model = Model(inputs=input_img, outputs=[decoded1, decoded1])`, there is no error, but then I can't monitor the two losses separately. Am I making a mistake in constructing the model?

user3443033
  • the problem is with your loss function. In the code you have pasted, `decoded2` gets the `artificial_diffusion` loss function, which does not have a gradient defined. In the other case only the `mse` loss is applied, and therefore there is no error. I'd suggest not using `numpy` methods in your loss function because they do not have an inherent gradient defined – learner May 14 '20 at 06:30
  • Thank you. Is there any function in tensorflow that computes the gradient (like numpy does, with finite differences) where I can define the axis? Or will I have to write my own function to do that? – user3443033 May 14 '20 at 15:13
  • what is it that you are trying to achieve? – learner May 14 '20 at 17:45
  • If we have a matrix A in numpy, we can compute its gradient using np.gradient(A). Numpy does it with the finite-difference method (https://stackoverflow.com/questions/24633618/what-does-numpy-gradient-do). Now, I want to create a custom loss function based on the mean squared error between the gradients of ypred and ytrue. You can assume that there is only one channel of the image, so the dimension of ypred is [batch_size,nx,ny,1]. Thank you. – user3443033 May 14 '20 at 22:03
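
Following up on the last comment: a differentiable alternative to pushing `np.gradient` through `tf.py_func` is to build the finite difference from shifted tensor slices, so that every op in the loss has a gradient. The sketch below is only illustrative: it assumes the spatial axis is axis 1 of the `[batch, n_states, n_outputs]` output tensors, uses a grid spacing of `dx = 1.0` as in the question's `grad1`, and the helper name `second_diff` is made up for the example.

    import tensorflow as tf
    from keras import backend as K  # or tensorflow.keras.backend

    def second_diff(y, dx=1.0):
        # Central finite difference (second derivative) along axis 1 of a
        # [batch, nx, channels] tensor, built from shifted slices so that
        # TensorFlow can backpropagate through it.
        return (y[:, 2:, :] - 2.0 * y[:, 1:-1, :] + y[:, :-2, :]) / (dx * dx)

    def artificial_diffusion(y_true, y_pred):
        # Same idea as the original loss, but without tf.py_func,
        # so the gradient is defined everywhere.
        u_xxt = second_diff(y_true)
        u_xxp = second_diff(y_pred)
        return K.mean(K.square(u_xxt - u_xxp)) + K.epsilon()

For the 2D case described in the last comment (`[batch_size, nx, ny, 1]`), `tf.image.image_gradients` returns differentiable first-order differences along the height and width axes, which can be applied twice or combined with the same slicing approach to obtain second derivatives.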

0 Answers