## Correlation coefficients and p values for all pairs of rows of a matrix

I have a matrix `data` with *m* rows and *n* columns. I used to compute the correlation coefficients between all pairs of rows using `np.corrcoef`:

```python
import numpy as np

data = np.array([[0, 1, -1], [0, -1, 1]])
np.corrcoef(data)
```

Now I would also like to have a look at the p-values of these coefficients. `np.corrcoef` doesn't provide these; `scipy.stats.pearsonr` does. However, `scipy.stats.pearsonr` does not accept a matrix as input.

Is there a quick way to compute both the coefficient and the p-value for all pairs of rows (arriving e.g. at two *m*-by-*m* matrices, one with correlation coefficients, the other with the corresponding p-values) without having to go through all pairs manually?

I encountered the same problem today.

After half an hour of googling, I couldn't find anything in the numpy/scipy libraries that does this, so I wrote my own version of **corrcoef**:

```python
import numpy as np
from scipy.special import betainc  # scipy.stats.betai was removed in SciPy 1.0
from scipy.stats import pearsonr


def corrcoef(matrix):
    r = np.corrcoef(matrix)
    rf = r[np.triu_indices(r.shape[0], 1)]
    df = matrix.shape[1] - 2
    ts = rf * rf * (df / (1 - rf * rf))
    pf = betainc(0.5 * df, 0.5, df / (df + ts))
    p = np.zeros(shape=r.shape)
    p[np.triu_indices(p.shape[0], 1)] = pf
    p[np.tril_indices(p.shape[0], -1)] = p.T[np.tril_indices(p.shape[0], -1)]
    p[np.diag_indices(p.shape[0])] = np.ones(p.shape[0])
    return r, p


def corrcoef_loop(matrix):
    rows, cols = matrix.shape[0], matrix.shape[1]
    r = np.ones(shape=(rows, rows))
    p = np.ones(shape=(rows, rows))
    for i in range(rows):
        for j in range(i + 1, rows):
            r_, p_ = pearsonr(matrix[i], matrix[j])
            r[i, j] = r[j, i] = r_
            p[i, j] = p[j, i] = p_
    return r, p
```

The first version uses the result of `np.corrcoef` and then calculates the p-values from the upper-triangle values of the corrcoef matrix.

The second, loop version just iterates over the rows and calls `pearsonr` manually.

```python
def test_corrcoef():
    a = np.array([
        [1, 2, 3, 4],
        [1, 3, 1, 4],
        [8, 3, 8, 5],
        [2, 3, 2, 1]])

    r1, p1 = corrcoef(a)
    r2, p2 = corrcoef_loop(a)

    assert np.allclose(r1, r2)
    assert np.allclose(p1, p2)
```

The test passes; the two versions give the same results.

```python
def test_timing():
    import time
    a = np.random.randn(100, 2500)

    def timing(func, *args, **kwargs):
        t0 = time.time()
        loops = 10
        for _ in range(loops):
            func(*args, **kwargs)
        print('{} takes {} seconds loops={}'.format(
            func.__name__, time.time() - t0, loops))

    timing(corrcoef, a)
    timing(corrcoef_loop, a)


if __name__ == '__main__':
    test_corrcoef()
    test_timing()
```

The performance on my MacBook against a 100x2500 matrix:

```
corrcoef takes 0.06608104705810547 seconds loops=10
corrcoef_loop takes 7.585600137710571 seconds loops=10
```
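The closed-form p-value used in the vectorized version can be sanity-checked against `pearsonr` on a single pair; a small sketch using `scipy.special.betainc` (the modern replacement for the removed `betai`) with hypothetical random data:

```python
import numpy as np
from scipy import special, stats

rng = np.random.default_rng(0)
x, y = rng.standard_normal(50), rng.standard_normal(50)

r, p = stats.pearsonr(x, y)  # reference two-tailed p-value

df = len(x) - 2              # degrees of freedom
t2 = r * r * df / (1 - r * r)  # squared t-statistic for the correlation
p_vec = special.betainc(0.5 * df, 0.5, df / (df + t2))

assert np.isclose(p, p_vec)  # same value as pearsonr's p
```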


The most concise way of doing it might be the built-in method `.corr` in `pandas`, to get r:

```
In [79]:

import pandas as pd
m = np.random.random((6, 6))
df = pd.DataFrame(m)
print(df.corr())

          0         1         2         3         4         5
0  1.000000 -0.282780  0.455210 -0.377936 -0.850840  0.190545
1 -0.282780  1.000000 -0.747979 -0.461637  0.270770  0.008815
2  0.455210 -0.747979  1.000000 -0.137078 -0.683991  0.557390
3 -0.377936 -0.461637 -0.137078  1.000000  0.511070 -0.801614
4 -0.850840  0.270770 -0.683991  0.511070  1.000000 -0.499247
5  0.190545  0.008815  0.557390 -0.801614 -0.499247  1.000000
```

To get p-values using a t-test:

```
In [84]:

n = 6
r = df.corr()
t = r * np.sqrt((n - 2) / (1 - r * r))
import scipy.stats as ss
ss.t.cdf(t, n - 2)

Out[84]:
array([[ 1.        ,  0.2935682 ,  0.817826  ,  0.23004382,  0.01585695,  0.64117917],
       [ 0.2935682 ,  1.        ,  0.04363408,  0.17836685,  0.69811422,  0.50661121],
       [ 0.817826  ,  0.04363408,  1.        ,  0.39783538,  0.06700715,  0.8747497 ],
       [ 0.23004382,  0.17836685,  0.39783538,  1.        ,  0.84993082,  0.02756579],
       [ 0.01585695,  0.69811422,  0.06700715,  0.84993082,  1.        ,  0.15667393],
       [ 0.64117917,  0.50661121,  0.8747497 ,  0.02756579,  0.15667393,  1.        ]])

In [85]: ss.pearsonr(m[:,0], m[:,1])
Out[85]: (-0.28277983892175751, 0.58713640696703184)

In [86]:
# be careful about the difference between a 1-tailed and a 2-tailed test:
0.58713640696703184 / 2
Out[86]: 0.2935682034835159  # the value in the ss.t.cdf(t, n-2) [0,1] cell
```
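The `ss.t.cdf` call is one-tailed; a two-tailed p-value matrix matching `pearsonr` can be obtained from the survival function instead. A sketch with hypothetical random data (the diagonal of `t` is infinite because r = 1 there, which is harmless):

```python
import numpy as np
import pandas as pd
import scipy.stats as ss

n = 6
m = np.random.random((n, 6))
df = pd.DataFrame(m)
r = df.corr().values

with np.errstate(divide='ignore'):   # r = 1 on the diagonal -> t = inf
    t = r * np.sqrt((n - 2) / (1 - r * r))

p = 2 * ss.t.sf(np.abs(t), n - 2)    # two-tailed p-values; 0 on the diagonal
```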

Also, you can just use the `scipy.stats.pearsonr` you mentioned in the OP:

```
In [95]:

# returns a list of tuples of (r, p, index1, index2)
import itertools
[ss.pearsonr(m[:,i], m[:,j]) + (i, j)
 for i, j in itertools.product(range(n), range(n))]

Out[95]:
[(1.0, 0.0, 0, 0),
 (-0.28277983892175751, 0.58713640696703184, 0, 1),
 (0.45521036266021014, 0.36434799921123057, 0, 2),
 (-0.3779357902414715, 0.46008763115463419, 0, 3),
 (-0.85083961671703368, 0.031713908656676448, 0, 4),
 (0.19054495489542525, 0.71764166168348287, 0, 5),
 (-0.28277983892175751, 0.58713640696703184, 1, 0),
 (1.0, 0.0, 1, 1),
 # etc, etc
```
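If you take this route, the flat result can be packed into the two *m*-by-*m* matrices the question asks for. A small sketch over rows with hypothetical random data, using `itertools.combinations` to visit each pair once and mirror the result:

```python
import itertools

import numpy as np
from scipy import stats

data = np.random.randn(4, 30)  # m rows, n columns
m_rows = data.shape[0]
r = np.ones((m_rows, m_rows))
p = np.ones((m_rows, m_rows))  # pearsonr of a row with itself gives p = 1
for i, j in itertools.combinations(range(m_rows), 2):
    r[i, j], p[i, j] = stats.pearsonr(data[i], data[j])
    r[j, i], p[j, i] = r[i, j], p[i, j]
```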


Sort of hackish and possibly inefficient, but I think this could be what you're looking for:

```python
import scipy.spatial.distance as dist
import scipy.stats as ss

# Pearson's correlation coefficients
print(dist.squareform(dist.pdist(data, lambda x, y: ss.pearsonr(x, y)[0])))

# p-values
print(dist.squareform(dist.pdist(data, lambda x, y: ss.pearsonr(x, y)[1])))
```

Scipy's `pdist` is a very helpful function, primarily meant for finding pairwise distances between observations in n-dimensional space.

But it allows user-defined callable 'distance metrics', which can be exploited to carry out any kind of pairwise operation. The result is returned in condensed distance matrix form, which can easily be converted to square matrix form using Scipy's `squareform` function.
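For the coefficients alone, the built-in `metric='correlation'` (implemented in C, so much faster than a Python callable) computes 1 − r for each pair; a sketch of recovering r from it, with hypothetical random data:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

data = np.random.randn(5, 100)
# pdist's built-in correlation metric returns 1 - r for each row pair
r = 1 - squareform(pdist(data, metric='correlation'))
# squareform puts 0 on the diagonal, so 1 - 0 = 1 there already
```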


If you do not have to use the Pearson correlation coefficient, you can use the Spearman correlation coefficient instead, as it returns both the correlation matrix and the p-values (note that the former requires that your data be normally distributed, whereas the Spearman correlation is a non-parametric measure and thus does not assume a normal distribution of your data). An example:

```python
from scipy import stats
import numpy as np

data = np.array([[0, 1, -1], [0, -1, 1], [0, 1, -1]])
print('np.corrcoef:', np.corrcoef(data))

cor, pval = stats.spearmanr(data.T)
print('stats.spearmanr - cor:\n', cor)
print('stats.spearmanr - pval\n', pval)
```
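Note that `spearmanr` can also treat the rows as variables directly via its `axis` argument, avoiding the transpose; a sketch with hypothetical random data:

```python
import numpy as np
from scipy import stats

data = np.random.randn(4, 50)
# axis=1: each row is a variable, so no transpose is needed
rho, pval = stats.spearmanr(data, axis=1)
```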


This behaves exactly like `corrcoef` in MATLAB.

To have this function work, you will need to install pandas as well as scipy.

```python
# Compute the correlation coefficient matrix and the p-value matrix
# Similar to the corrcoef function in MATLAB
# dframe: pandas dataframe
def corrcoef(dframe):
    fmatrix = dframe.values
    rows, cols = fmatrix.shape

    r = np.ones((cols, cols), dtype=float)
    p = np.ones((cols, cols), dtype=float)

    for i in range(cols):
        for j in range(cols):
            if i == j:
                r_, p_ = 1., 1.
            else:
                r_, p_ = pearsonr(fmatrix[:, i], fmatrix[:, j])
            r[j][i] = r_
            p[j][i] = p_

    return r, p
```



##### Comments

- Is there a reason not to just iterate through the row pairs? It is a bit clumsy, but the code is not very long, and most probably it is not going to be a performance problem, as most of the time is spent calculating the Pearson coefficients anyway. (I.e., do you mean "quick" as in your programming time or "quick" as in performance?) I suggest you take the trivial route and profile the actual performance.
- This code fails with scipy 1.0.0 because the `betai` function has been removed after deprecation. One should use `betainc` in the `scipy.special` module instead.
- Thanks for this solution, helped me a lot! Note that the p-values in this implementation are set to 0 when you compare the same feature (it returns 0 on the diagonal). However, e.g., `scipy.stats.pearsonr` would return `p=1` for these cases.
- Rather than passing your own Python function for computing the correlation coefficient, you can use `metric='correlation'`, which is equal to (1 - correlation coefficient) and is coded in C (so it should be much more efficient).
- He's looking for p-values as well. You won't get the p-values if you use the inbuilt correlation metric.
- You can derive p-values from the correlation coefficients fairly easily (see jingchao's answer and here)
- (also CT Zhu's answer)
- This approach fulfilled my needs, and it seems straightforward to me. Please follow any answer that suits you the most.