Hot questions: using neural networks in calculus


Are there any known approaches of making a machine learn calculus?

I've learnt that teaching a machine to calculate derivatives is quite simple, because differentiation can be implemented as a straightforward algorithm.

Meanwhile, an implementation of integration is possible but is rarely, if ever, fully implemented, due to its algorithmic complexity.

I am curious whether there have been any academic successes in using machine learning to evaluate and calculate integrals.


I am interested in teaching a computer to integrate using neural networks or similar methods.


My personal opinion is that it is not possible to feed a neural network enough rules for integrating. Why? Because neural networks are good at regression (i.e. function approximation) and logistic regression (i.e. classification). Integration is neither: it is a computation that follows strict algorithms. So from this perspective, it is a better idea to use established mathematical methods to integrate.
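To illustrate what those "mathematical ways" look like in practice, computer algebra systems already implement algorithmic symbolic integration (e.g. Risch-style algorithms). A minimal sketch using SymPy (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols("x")

# Symbolic (algorithmic) integration, no learning involved.
antiderivative = sp.integrate(x**2 * sp.cos(x), x)

# Differentiating the result back recovers the integrand x**2*cos(x).
print(antiderivative)
```

This is exactly the kind of rule-based computation that is hard to "feed into" a network as training examples but easy to implement directly.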


I am trying to build a mathematical-operation-selection NN model that works on a scalar input. The operation is selected based on the softmax result produced by the NN, and that operation is then applied to the scalar input to produce the final output. So far I've come up with applying argmax and one_hot to the softmax output to produce a mask, which is then applied to the concatenated matrix of values from all the possible operations (as shown in the pseudo-code below). The issue is that neither argmax nor one_hot appears to be differentiable. I am new to this, so any help would be highly appreciated. Thanks in advance.

    # perform softmax
    logits  = tf.matmul(current_input, W) + b
    softmax = tf.nn.softmax(logits)

    # perform all possible operations on the input
    op_1_val = tf_op_1(current_input)
    op_2_val = tf_op_2(current_input)
    op_3_val = tf_op_3(current_input)
    values = tf.concat([op_1_val, op_2_val, op_3_val], 1)

    # create a mask
    argmax = tf.argmax(softmax, 1)
    mask   = tf.one_hot(argmax, num_of_operations)

    # produce the output, masking out the operation results that were not selected
    output = values * mask


I believe that this is not possible directly. This is similar to the hard attention described in this paper. Hard attention is used in image captioning to allow the model to focus on only a certain part of the image at each step. Hard attention is not differentiable, but there are two ways to work around this:

1- Use reinforcement learning (RL): RL is designed to train models that make decisions. Even though the loss function won't back-propagate any gradients to the softmax used for the decision, you can use RL techniques to optimize the decision. For a simplified example, you can treat the loss as a penalty and send the node with the maximum value in the softmax layer a policy gradient proportional to the penalty, in order to decrease the score of the decision if it was bad (i.e. resulted in a high loss).
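A minimal NumPy sketch of this score-function (REINFORCE) idea on a toy problem: sample an operation from the softmax, then nudge the logits in the direction of the log-probability gradient scaled by the reward. The reward here is made up purely for illustration (it pretends operation 2 is always the right choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.zeros(3)  # scores for 3 candidate operations
lr = 0.5

for step in range(200):
    probs = softmax(logits)
    k = rng.choice(3, p=probs)        # sample a discrete decision
    reward = 1.0 if k == 2 else 0.0   # toy reward: operation 2 is "correct"
    # Score-function gradient of log pi(k) w.r.t. the logits: one_hot(k) - probs
    grad_logp = -probs
    grad_logp[k] += 1.0
    logits += lr * reward * grad_logp  # ascend the expected reward

print(softmax(logits))  # probability mass concentrates on operation 2
```

The non-differentiable argmax/one_hot step is replaced by sampling, and the gradient is estimated from the reward instead of back-propagated through the selection.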

2- Use something like soft attention: instead of picking only one operation, mix them with weights based on the softmax. So instead of:

    output = values * mask

use:

    output = values * softmax

Now, the contribution of each operation shrinks toward zero according to how little the softmax selects it. This is easier to train than RL, but it won't work if you must completely remove the non-selected operations from the final result (set them to exactly zero).
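The difference between the two selection schemes fits in a few lines of NumPy (the three operations below are arbitrary stand-ins for whatever `tf_op_1`…`tf_op_3` compute):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = 2.0
values = np.array([x + 1.0, x * 3.0, x ** 2])  # stand-ins for the three ops
weights = softmax(np.array([0.1, 2.0, 0.5]))   # softmax over operation logits

# Hard selection (argmax + one-hot): picks exactly one value,
# but is not differentiable w.r.t. the logits.
hard = values[np.argmax(weights)]

# Soft selection: a weighted mix of all values,
# fully differentiable w.r.t. the logits.
soft = values @ weights

print(hard, soft)
```

The soft output is a blend dominated by the highest-weighted operation, which is why it only approximates, and never exactly reproduces, the hard selection.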

Here is another answer about hard and soft attention that you may find helpful:


I have data pairs (x,y) which are created by a cubic function

y = g(x) = ax^3 − bx^2 − cx + d 

plus some random noise. Now, I want to fit a model (parameters a,b,c,d) to this data using gradient descent.

My implementation:


import numpy as np

def model(param, x, y, derivative=False):
    y_hat = param["a"]*x**3 + param["b"]*x**2 + param["c"]*x + param["d"]
    if not derivative:
        return y_hat
    derv = {}  # gradients of the cost function w.r.t. the parameters
    m = len(y_hat)
    derv["a"] = 2*np.sum((y_hat - y)*x**3)/m
    derv["b"] = 2*np.sum((y_hat - y)*x**2)/m
    derv["c"] = 2*np.sum((y_hat - y)*x)/m
    derv["d"] = 2*np.sum(y_hat - y)/m
    return derv

def cost(y_hat,y): 
    return (np.sum(np.power(y_hat-y,2)))/len(y)

def optimizer(param, x, y, lr=0.01, epochs=100):
    for i in range(epochs):
        y_hat = model(param, x, y)
        derv = model(param, x, y, derivative=True)
        for k in param:
            param[k] -= lr * derv[k]  # gradient-descent update
        if i % 10 == 0:
            print(cost(y_hat, y))

X = np.array(x)
Y = np.array(y)

When run, the cost seems to be increasing:


Did I compute the gradients wrong? I don't know why the cost is exploding.

Here is the data:


If I run your code with e.g. lr=1e-4, the cost decreases.
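The effect is easy to reproduce with synthetic cubic data (the coefficients and the `fit` helper below are made up for illustration, not the asker's dataset): with lr=0.01 the cost blows up, while with lr=1e-4 it shrinks below the starting cost.

```python
import numpy as np

def fit(x, y, lr, epochs=100):
    """Run plain gradient descent on a cubic model; return the final MSE cost."""
    param = {"a": 0.0, "b": 0.0, "c": 0.0, "d": 0.0}
    for _ in range(epochs):
        y_hat = param["a"]*x**3 + param["b"]*x**2 + param["c"]*x + param["d"]
        r = y_hat - y
        m = len(y)
        grads = {"a": 2*np.sum(r*x**3)/m, "b": 2*np.sum(r*x**2)/m,
                 "c": 2*np.sum(r*x)/m,    "d": 2*np.sum(r)/m}
        for k in param:
            param[k] -= lr * grads[k]
    y_hat = param["a"]*x**3 + param["b"]*x**2 + param["c"]*x + param["d"]
    return np.sum((y_hat - y)**2) / len(y)

x = np.linspace(-3, 3, 100)
y = 1.0*x**3 - 0.5*x**2 - 2.0*x + 1.0  # made-up ground-truth cubic

print(fit(x, y, lr=1e-4))  # cost decreases: step size is small enough
print(fit(x, y, lr=1e-2))  # cost explodes: step overshoots the minimum
```

The x**3 term makes the gradients large (they involve sums of x**6), so the stable learning-rate range is much smaller than for typical normalized data.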

Check your gradients (just print the result of model(..., derivative=True)); you will see that they are quite large. Since your learning rate is also not small, you are likely oscillating away from the minimum (see any ML textbook for example plots of this; you should also be able to see it if you print your parameters after every iteration).
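One concrete way to check the gradients is a finite-difference test: perturb each parameter slightly and compare the numerical slope of the cost with the analytic gradient. A self-contained sketch with made-up data, assuming the mean-squared cost and its 2/m gradient factor:

```python
import numpy as np

def predict(param, x):
    return param["a"]*x**3 + param["b"]*x**2 + param["c"]*x + param["d"]

def cost(param, x, y):
    return np.sum((predict(param, x) - y)**2) / len(y)

def analytic_grad(param, x, y):
    # Gradients of the mean-squared cost w.r.t. a, b, c, d.
    r = predict(param, x) - y
    m = len(y)
    return {"a": 2*np.sum(r*x**3)/m, "b": 2*np.sum(r*x**2)/m,
            "c": 2*np.sum(r*x)/m,    "d": 2*np.sum(r)/m}

x = np.linspace(-2, 2, 50)
y = 1.5*x**3 - 0.4*x**2 - 2.0*x + 3.0  # made-up ground truth
param = {"a": 0.1, "b": 0.1, "c": 0.1, "d": 0.1}

g = analytic_grad(param, x, y)
eps = 1e-6
for k in param:
    p_hi = dict(param); p_hi[k] += eps
    p_lo = dict(param); p_lo[k] -= eps
    numeric = (cost(p_hi, x, y) - cost(p_lo, x, y)) / (2 * eps)
    # Central differences should agree closely with the analytic gradient.
    assert abs(numeric - g[k]) < 1e-4, k

print("gradients match finite differences")
```

If this check passes but training still diverges, the problem is the step size, not the gradient formula.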