Compute divergence of vector field using python

Is there a function that can be used to calculate the divergence of a vector field? (One exists in Matlab.) I would expect it to exist in numpy/scipy, but I cannot find it using Google.

I need to calculate div[A * grad(F)], where

F = np.array([[1,2,3,4],[5,6,7,8]]) # (2D numpy ndarray)

A = np.array([[1,2,3,4],[1,2,3,4]]) # (2D numpy ndarray)

so grad(F) is a list of 2D ndarrays
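For reference, a quick sketch (mine, not part of the original question) of what np.gradient returns for such an F: one array per axis, each with the same shape as F.

import numpy as np

F = np.array([[1, 2, 3, 4], [5, 6, 7, 8]], dtype=float)
grad_F = np.gradient(F)        # [dF/d(axis 0), dF/d(axis 1)]
print(len(grad_F))             # 2
print(grad_F[0].shape)         # (2, 4)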

I know I can calculate the divergence like this, but I do not want to reinvent the wheel. (I would also expect something more optimized.) Does anyone have suggestions?

Just a hint for everybody reading this:

The functions above do not compute the divergence of a vector field; they sum the derivatives of a scalar field A:

result = dA/dx + dA/dy

in contrast to the divergence of a vector field (three-dimensional example):

result = sum_i dAi/dxi = dAx/dx + dAy/dy + dAz/dz

Vote down for all! It is simply mathematically wrong.

Cheers!
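To make the difference concrete, a minimal sketch (my example, not from the thread): for the field F = (x, y) the true divergence is identically 2, while summing the gradient components of a single scalar array gives something else.

import numpy as np

x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50), indexing='ij')
d = x[1, 0] - x[0, 0]                       # grid spacing

Fx, Fy = x, y                               # vector field F = (x, y), div F == 2

# Correct: one partial derivative per component, along the matching axis.
div_ok = np.gradient(Fx, d, axis=0) + np.gradient(Fy, d, axis=1)
print(np.allclose(div_ok, 2.0))             # True

# Wrong: summing all gradient components of one scalar array (dFx/dx + dFx/dy).
div_bad = np.sum(np.gradient(Fx, d), axis=0)
print(np.allclose(div_bad, 2.0))            # False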

import numpy as np

def divergence(field):
    """Return the divergence of an n-D field."""
    return np.sum(np.gradient(field), axis=0)
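Usage (my sketch, using the question's F; note that, per the hint above, this sums the gradient components of a single array):

F = np.array([[1, 2, 3, 4], [5, 6, 7, 8]], dtype=float)
print(divergence(F))   # same shape as F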

@user2818943's answer is good, but it can be optimized a little:

from functools import reduce  # needed on Python 3, where reduce is no longer a builtin

def divergence(F):
    """Compute the divergence of the n-D scalar field `F`."""
    return reduce(np.add, np.gradient(F))

Timeit:

F = np.random.rand(100,100)
timeit reduce(np.add,np.gradient(F))
# 1000 loops, best of 3: 318 us per loop

timeit np.sum(np.gradient(F),axis=0)
# 100 loops, best of 3: 2.27 ms per loop

About 7 times faster: sum implicitly constructs a 3D array from the list of gradient fields returned by np.gradient. This intermediate allocation is avoided by using reduce, which adds the fields pairwise.


Now, what do you mean by div[A * grad(F)] in your question?

  1. About A * grad(F): A is a 2D array and grad(F) is a list of 2D arrays, so I took it to mean multiplying each gradient field by A.
  2. Applying divergence to the (A-scaled) gradient field is less clear. By definition, div(F) = dF/dx + dF/dy + ..., so I guess this is just an error of formulation.

For 1, multiplying summed elements Bi by the same factor A can be factored out:

Sum(A*Bi) = A*Sum(Bi)

Thus, you can get this weighted divergence simply with: A*divergence(F)
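A quick numerical check of that factorization, using the arrays from the question (my sketch):

import numpy as np
from functools import reduce

F = np.array([[1, 2, 3, 4], [5, 6, 7, 8]], dtype=float)
A = np.array([[1, 2, 3, 4], [1, 2, 3, 4]], dtype=float)

lhs = reduce(np.add, [A * g for g in np.gradient(F)])  # Sum(A*Bi)
rhs = A * reduce(np.add, np.gradient(F))               # A*Sum(Bi)
print(np.allclose(lhs, rhs))                           # True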

If `A` is instead a list of factors, one for each dimension, then the solution would be:

def weighted_divergence(W, F):
    """
    Return the divergence of n-D array `F` with gradient weighted by `W`.

    `W` is a list of factors, one for each dimension of `F`: the gradient of
    `F` along the `i`th dimension is multiplied by `W[i]`. Each `W[i]` can be
    a scalar or an array with the same (or broadcastable) shape as `F`.
    """
    wGrad = map(np.multiply, W, np.gradient(F))
    return reduce(np.add, wGrad)

result = weighted_divergence(A,F)
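For illustration, a hypothetical call with one weight per axis (the values here are invented):

F = np.random.rand(100, 100)
W = [2.0, 0.5]                        # scalar weight per axis; arrays broadcastable to F also work
result = weighted_divergence(W, F)
print(result.shape)                   # (100, 100)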

Based on Juh_'s answer, but modified to use the correct formula for the divergence of a vector field:

def divergence(f):
    """
    Computes the divergence of the vector field f, corresponding to dFx/dx + dFy/dy + ...
    :param f: List of ndarrays, where every item of the list is one dimension of the vector field
    :return: Single ndarray of the same shape as each of the items in f, which corresponds to a scalar field
    """
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])

Matlab's documentation uses this exact formula (scroll down to Divergence of a Vector Field)
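As a quick sanity check (my addition, not part of the original answer): on a unit-spaced grid, the finite differences used by np.gradient are exact for linear fields, so F = (2x + y, 3y) should give div F = 5 everywhere.

import numpy as np

x, y = np.meshgrid(np.arange(32.0), np.arange(32.0), indexing='ij')
F = [2 * x + y, 3 * y]                   # div F = 2 + 3 = 5
print(np.allclose(divergence(F), 5.0))   # True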

Scalar and Vector Field Functionality — SymPy 1.6.2 documentation: to compute the divergence of a vector field symbolically, you can use sympy.physics.vector. For numerical work without a detour through symbolic differentiation, numdifftools does not provide a curl() or divergence() function, but it does compute the Jacobian matrix of a vector-valued function, and that Jacobian contains the derivatives of all components of the field with respect to each variable.
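One way to use that (a sketch assuming numdifftools is installed; divergence_at is my name, and the divergence at a point is the trace of the Jacobian there):

import numpy as np
import numdifftools as nd

def divergence_at(f, point):
    """Divergence of the vector-valued function f at a point: trace of its Jacobian."""
    return np.trace(nd.Jacobian(f)(point))

f = lambda p: np.array([np.cos(p[0] + 2 * p[1]), np.sin(p[0] - 2 * p[1])])
print(divergence_at(f, [0.0, 0.0]))   # analytic value: -sin(0) - 2*cos(0) = -2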

What Daniel modified is the right answer; let me explain the self-defined divergence function in more detail:

The function np.gradient() is defined as: np.gradient(f) = (df/dx, df/dy, df/dz, ...)

but we need to define the divergence function as: divergence(f) = dfx/dx + dfy/dy + dfz/dz + ... — that is, one partial derivative per component, each taken along its own axis: np.gradient(fx, axis=0) + np.gradient(fy, axis=1) + np.gradient(fz, axis=2) + ...

Let's test it and compare with the divergence example in Matlab's documentation:

import numpy as np
import matplotlib.pyplot as plt

NY = 50
ymin, ymax = -2., 2.
dy = (ymax - ymin) / (NY - 1.)   # grid spacing along y

NX = NY
xmin, xmax = -2., 2.
dx = (xmax - xmin) / (NX - 1.)   # grid spacing along x

def divergence(f):
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])

# Equally spaced grid points, equivalent to ymin + i*dy and xmin + i*dx
y = np.linspace(ymin, ymax, NY)
x = np.linspace(xmin, xmax, NX)

x, y = np.meshgrid(x, y, indexing='ij', sparse=False)

# Same example field as in Matlab's divergence documentation
Fx = np.cos(x + 2 * y)
Fy = np.sin(x - 2 * y)

F = [Fx, Fy]
g = divergence(F)

plt.pcolormesh(x, y, g)
plt.colorbar()
plt.savefig('Div' + str(NY) + '.png', format='png')
plt.show()
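One caveat (my note, not part of the original answer): divergence() above uses np.gradient's default unit spacing, so the result differs from Matlab's divergence(x, y, Fx, Fy) by the grid-step factor. Passing each axis's spacing to np.gradient fixes the scaling:

def divergence_spaced(f, spacing):
    """Divergence with an explicit grid spacing per axis (sketch)."""
    return np.ufunc.reduce(np.add,
                           [np.gradient(f[i], spacing[i], axis=i) for i in range(len(f))])

g = divergence_spaced(F, [dx, dy])   # now scaled like Matlab's divergence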

div = divergence(X,Y,U,V) computes the divergence of a 2-D vector field U, V. The arrays X and Y, which define the coordinates for U and V, must be monotonic, but do not need to be uniformly spaced. X and Y must have the same number of elements, as if produced by meshgrid. div = divergence(U,V) assumes X and Y are determined by the expression [X,Y] = meshgrid(1:n, 1:m), where [m,n] = size(U).
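A NumPy equivalent of that signature can be sketched with np.gradient, which also accepts coordinate arrays for non-uniform spacing (divergence_xy is my name; 1-D coordinate arrays per axis are assumed here):

import numpy as np

def divergence_xy(x, y, u, v):
    """Sketch of Matlab's divergence(X, Y, U, V) for a 2-D field.

    `x`, `y` are 1-D, monotonic coordinate arrays; `u`, `v` are 2-D arrays
    indexed as u[i, j] = U(x[i], y[j]).
    """
    return np.gradient(u, x, axis=0) + np.gradient(v, y, axis=1)

x = np.linspace(-2, 2, 50)
y = np.linspace(-2, 2, 50)
X, Y = np.meshgrid(x, y, indexing='ij')
div = divergence_xy(x, y, np.cos(X + 2 * Y), np.sin(X - 2 * Y))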

Given the vector field F = P i + Q j + R k, the divergence is defined to be div F = ∂P/∂x + ∂Q/∂y + ∂R/∂z. There is also a definition of the divergence in terms of the ∇ operator: div F = ∇ · F.

Comments
  • What order accuracy do you need? are your arrays equally spaced?
  • blog.sun.tc/2010/10/jensenshannon-divergence-in-numpy.html
  • @mgilson Yes, arrays are equally spaced. I need double precision.
  • @ZagorulkinDmitry, Jensen–Shannon divergence is something completely different
  • @nyvltak -- Not precision, order. As in O(h) or O(h**2), and what is the spacing? ...
  • Kind of weird how this is at the bottom of the page. The other answers are really mathematically incorrect
  • This may be mathematically correct, but is only a first step towards an answer. The current text does not answer the question. There are updated answers below that actually answers the question at hand.
  • This answer could be improved by also telling me the answer to the question I googled, as well as telling me the other answers here are not what I want.
  • Isn't the divergence of a vector field F = d(Fx)/dx + d(Fy)/dy + ... ? The correct formula would be something more like np.ufunc.reduce(np.add, [np.gradient(F[i], axis=i) for i in range(len(F))])
  • That all depends on the type of data in F; the question is unclear. I have experience in image processing, so I considered F to be an n-D image, and thus the gradient is the list of derivatives along axis x, then y (and more if there are), which I sum up. If I understand correctly, if F is an n*m 2D sequence of n vectors that are m-dimensional, then I guess your formulation is the correct one. However, I would not understand it if F were more than 2D.