## How to speed up cosine similarity calculation using nested loops in Python

I'm trying to calculate the cosine similarity between every vector in `array_A` and every vector in `array_B`.

The 1,000 × 20,000 pairwise calculations take more than 10 minutes.

Code:

```python
from gensim import matutils

# array_A contains 1,000 TF-IDF vectors
# array_B contains 20,000 TF-IDF vectors
for x in array_A:
    for y in array_B:
        matutils.cossim(x, y)
```

I need to use the gensim package for both the TF-IDF values and the similarity calculation.

Can someone please give me some advice on how to speed this up?

Use memoization, and maybe convert the vectors to tuples (tuples are hashable and may be faster to handle):

```python
from gensim import matutils

def memoize(f):
    memo = {}
    def helper(a, b):
        # cosine similarity is symmetric, so check both argument orders
        if (b, a) in memo:
            return memo[(b, a)]
        if (a, b) in memo:
            return memo[(a, b)]
        memo[(a, b)] = f(a, b)
        return memo[(a, b)]
    return helper

@memoize
def myfunc(a, b):
    return matutils.cossim(a, b)
```

**EDIT**
After adding the code above, collect the results like this in case you want to do something else with the data:

```python
cossim_responses = [myfunc(a, b) for a in array_A for b in array_B]
# or, as a lazy generator:
# cossim_responses = (myfunc(a, b) for a in array_A for b in array_B)
```
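One caveat with the memoize approach: the memo's dict keys must be hashable, and gensim's sparse vectors are lists of `(term_id, weight)` pairs, which are not. A minimal sketch of the tuple conversion suggested above, using tiny stand-in data in place of the question's arrays:

```python
# Stand-in for the question's arrays: gensim sparse vectors are
# lists of (term_id, weight) pairs. Lists aren't hashable, so
# convert each vector to a tuple before using it as a memo key.
array_A = [[(0, 0.5), (3, 0.8)], [(1, 1.0)]]
array_A = [tuple(x) for x in array_A]

memo = {}
key = (array_A[0], array_A[1])   # tuples can be dict keys
memo[key] = 0.0
```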


You can look at the source for gensim's `matutils.cossim()`.

You'll see it's doing a bit of work on its two (sparse-array) arguments to move their non-zero dimensions into temporary dicts, then calculating their lengths – which is repeated every time the same vector is supplied in your loops.

You might get a reasonable speedup by doing those steps on each vector only once, and remembering those dicts & lengths for re-use on each final pairwise calculation. (That is, memoizing the interim values, rather than just the final values.)
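A minimal sketch of that idea, memoizing the interim values rather than the final ones: convert each sparse vector to a dict and compute its length once, then reuse both for every pairwise calculation. (`prepare` and `fast_cossim` are hypothetical names; the vectors are assumed to be gensim-style `(term_id, weight)` lists, and the tiny arrays below stand in for the question's TF-IDF data.)

```python
import math

def prepare(vec):
    """Do the per-vector work once: dict of non-zero dims + length."""
    d = dict(vec)
    length = math.sqrt(sum(w * w for w in d.values()))
    return d, length

def fast_cossim(pa, pb):
    """Pairwise step on two prepared (dict, length) pairs."""
    (da, la), (db, lb) = pa, pb
    if la == 0.0 or lb == 0.0:
        return 0.0
    if len(da) > len(db):        # iterate over the smaller dict
        da, db = db, da
    dot = sum(w * db.get(t, 0.0) for t, w in da.items())
    return dot / (la * lb)

# Tiny stand-ins for the question's 1,000- and 20,000-vector arrays:
array_A = [[(0, 1.0), (1, 1.0)]]
array_B = [[(0, 1.0), (1, 1.0)], [(2, 1.0)]]

prepped_A = [prepare(x) for x in array_A]   # 1,000 preparations
prepped_B = [prepare(y) for y in array_B]   # 20,000 preparations
results = [fast_cossim(a, b) for a in prepped_A for b in prepped_B]
```

This way the dict-building and length computation happen 21,000 times instead of being repeated inside all 20,000,000 pairwise calls.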


You can also use Nmslib or Faiss for fast vector search operations.
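If stepping outside gensim's `cossim()` for the final step is acceptable, another option is to compute all pairs at once with a single matrix product instead of nested loops. A minimal NumPy sketch with random stand-in data (in practice the dense arrays would be built from the TF-IDF corpus, e.g. via gensim's `matutils.corpus2dense`):

```python
import numpy as np

# Random stand-in for the 1,000 x 20,000 TF-IDF data;
# each row is one vector.
A = np.random.rand(10, 50)
B = np.random.rand(20, 50)

# Normalize rows to unit length, then one matrix product yields
# every pairwise cosine similarity at once.
A_norm = A / np.linalg.norm(A, axis=1, keepdims=True)
B_norm = B / np.linalg.norm(B, axis=1, keepdims=True)
sims = A_norm @ B_norm.T          # shape (10, 20), one entry per pair
```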
