
PyTorch Most Efficient Jacobian/Hessian Calculation

I am looking for the most efficient way to get the Jacobian of a function through PyTorch, and have so far come up with the following solutions:

# Setup
def func(X): return torc
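The setup snippet is cut off above. A minimal, hypothetical reconstruction that is consistent with the hand-derived Jacobian in Solution 3 below (whose rows are 2·X, 3·X², 4·X³) could look like the following; the exact function and input shape from the original post are assumptions:

import torch

# Hypothetical setup, inferred from Solution 3 below: three scalar outputs
# per batch row, so the Jacobian rows are 2*X, 3*X^2 and 4*X^3.
def func(X):
    return torch.stack((X.pow(2).sum(1),
                        X.pow(3).sum(1),
                        X.pow(4).sum(1)), 1)

# Assumed input: a single batch row of moderate size.
X = torch.full((1, 1000), 2.0, requires_grad=True)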

Solution 1:

The most efficient approach is likely PyTorch's own built-in functions:

torch.autograd.functional.jacobian(func, x)
torch.autograd.functional.hessian(func, x)
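For concreteness, a minimal usage sketch of these two calls (the small function and input below are illustrative, not the ones from the question; note that hessian expects a function with a single scalar output):

import torch
from torch.autograd.functional import jacobian, hessian

x = torch.randn(5)

def f_vec(x):
    return x.pow(3)          # vector output -> full Jacobian matrix

def f_scalar(x):
    return x.pow(3).sum()    # scalar output -> gradient vector and Hessian

J = jacobian(f_vec, x)       # shape (5, 5), here diag(3 * x**2)
g = jacobian(f_scalar, x)    # shape (5,), the gradient 3 * x**2
H = hessian(f_scalar, x)     # shape (5, 5), here diag(6 * x)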

Solution 2:

I had a similar problem, which I solved by defining the Jacobian manually (calculating the derivatives by hand). For my problem this was feasible, but I can imagine that it is not always the case. On my machine (CPU), this improves the computation time by roughly a factor of 20 compared to the batched backward call in Solution 2, as the timings below show.

# Solution 2: repeat the input once per output dimension and do a single
# backward pass with an identity matrix as grad_outputs, so that row i of
# X_batch.grad receives the gradient of output i only.
def Jacobian(f, X):
    X_batch = X.repeat(3, 1).detach().requires_grad_(True)
    f(X_batch).backward(torch.eye(3, device=X_batch.device), retain_graph=True)
    return X_batch.grad

%timeit Jacobian(func,X)
11.7 ms ± 130 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# Solution 3: Jacobian derived by hand for this particular func
def J_func(X):
    return torch.stack((2 * X,
                        3 * X.pow(2),
                        4 * X.pow(3)), 1)

%timeit J_func(X)
539 µs ± 24.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
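As a quick consistency check, the hand-derived Jacobian can be compared against PyTorch's built-in result. The func used here is the same hypothetical reconstruction as sketched above (an assumption, since the original setup is truncated):

import torch
from torch.autograd.functional import jacobian

# Hypothetical stand-in for the truncated setup.
def func(X):
    return torch.stack((X.pow(2).sum(1),
                        X.pow(3).sum(1),
                        X.pow(4).sum(1)), 1)

# Hand-derived Jacobian, as in Solution 3.
def J_func(X):
    return torch.stack((2 * X, 3 * X.pow(2), 4 * X.pow(3)), 1)

X = torch.full((1, 10), 2.0)

# autograd Jacobian: output (1, 3) w.r.t. input (1, 10) -> shape (1, 3, 1, 10)
J_auto = jacobian(func, X).squeeze(2)      # -> (1, 3, 10), same layout as J_func
print(torch.allclose(J_auto, J_func(X)))   # True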
