Any Python package to speed up for-loop computation?

I have two lists L and C, both sorted from smallest to largest. L contains positive integers; C contains both positive integers and positive fractional numbers (e.g. 0.01, 0.05, ..., 100). The length of C is fixed at 6000+, while the length of L varies (between 2 and 3000).

The goal: given some constant M, find l from L and c from C such that l*c <= M and l*c is as close as possible to M.

Currently I'm using a for loop over C and a binary search over list L to find the largest l*c that is <= M. However, it is very slow.

import bisect

candidate_list = []
for c in C:
    # binary search on L for the largest l such that l * c <= M
    i = bisect.bisect_right(L, M / c) - 1
    if i >= 0:
        candidate_list.append(L[i] * c)
print(max(candidate_list))

If the length of L is N, each binary search takes O(log N). However, since C has 6000+ elements, the Python for loop over C is slow, and with multiple lists L of different lengths it becomes very slow. Is there any numpy or scipy functionality I can use to speed up this calculation?

Note: Since I have many lists L, I can't simply do a numpy matrix multiplication between L and C_transpose and use argmax to find the largest l*c that is <= M.

Because both lists are sorted, it is enough to use a linear (two-pointer) algorithm:

Traverse one list in the forward direction and find the best pair for item[A] in the second list (say at index K).

For the next item[A+1], the paired item definitely has an index smaller than or equal to the previous one (K), so you only need one pass through the second list.

Pseudocode:

iL = len(L) - 1
for iC in range(len(C)):
    while L[iL] * C[iC] > M:
        iL -= 1
    # use pair L[iL], C[iC]

User @Mbo made a good point in his answer:

Traverse one list in the forward direction and find the best pair for item[A] in the second list, but start the search from the back of the second list. For the next item[A+1], its paired item definitely has an index smaller than or equal to the previous one (K), so you only need one pass through the second list.

Here is a sample implementation of the pseudocode he provides (linear time complexity, bounded by the length of your longest list, which would be list C from your question):

def find(list_c, list_l, threshold):
    # all pairs of elements whose product is smaller than 'threshold'
    possible_pairs = []

    j = len(list_l) - 1
    for i in range(len(list_c)):
        while list_c[i] * list_l[j] > threshold:
            # product is too big, pick a smaller element from 'list_l'
            j -= 1

            if j < 0:
                # exit while loop
                break

        if j < 0:
            # exit for loop
            break

        # we store some extra info here
        possible_pairs.append({
            'c_index': i,
            'c_elem': list_c[i],
            'l_index': j,
            'l_elem': list_l[j],
            'product': list_c[i] * list_l[j],
        })

    print(possible_pairs)

    if not possible_pairs:
        # no pair satisfies the threshold
        return None

    # return the pair with the biggest product (closest to threshold)
    return max(
        possible_pairs,
        key=lambda x: x['product'])

I also tested this solution:

import random

list_c = sorted(random.random() * 100 for i in range(100))
list_l = sorted(random.random() * 100 for i in range(20))
print('list_c', list_c)
print('list_l', list_l)

elem = find(list_c, list_l, threshold=50)

print('the best pair is')
print(elem)

The last print outputs something like:

{
    'c_index': 47,
    'c_elem': 46.42324820342966,
    'l_index': 0,
    'l_elem': 1.0709460533705695,
    'product': 49.716794448105375,
}

As you can see, a solution like this can be used to run the search sequentially against the many L lists you mention in your question.
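
A minimal sketch of that, assuming the L lists are collected in a plain Python list (the name all_l_lists is made up here for illustration):

# hypothetical collection of sorted L lists
all_l_lists = [
    list_l,
    sorted(random.random() * 100 for i in range(30)),
]

# run the search once per L list and keep the overall best pair
results = [find(list_c, one_l, threshold=50) for one_l in all_l_lists]
best_overall = max(
    (r for r in results if r is not None),
    key=lambda x: x['product'])
print(best_overall)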

The numba package is specifically designed to speed up Python for loops.

From their website: Numba translates Python functions to optimized machine code at runtime using the industry-standard LLVM compiler library. Numba-compiled numerical algorithms in Python can approach the speeds of C or FORTRAN.
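
As a rough sketch (not from any of the answers above), the two-pointer search could be compiled with numba; the function name best_product and the random test data are made up here for illustration:

import numpy as np
from numba import njit

@njit
def best_product(C, L, M):
    # C and L must be numpy arrays sorted in ascending order;
    # returns the largest product l * c <= M, or -1.0 if no pair qualifies
    best = -1.0
    j = len(L) - 1
    for i in range(len(C)):
        while j >= 0 and C[i] * L[j] > M:
            j -= 1
        if j < 0:
            break
        p = C[i] * L[j]
        if p > best:
            best = p
    return best

# hypothetical test data with roughly the sizes from the question
C = np.sort(np.random.rand(6000) * 100)
L = np.sort(np.random.randint(1, 1000, 3000)).astype(np.float64)
print(best_product(C, L, 50.0))

The first call includes compilation time; repeated calls on arrays of the same dtype run at compiled speed, which helps when the same search is repeated over many L lists.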

Comments
  • If your loop is very large, try running it with PyPy instead of standard Python (CPython).
  • If you are working with numerical data and not Python objects, numba can generate very fast code, and can even automatically parallelize your loops if the iterations are independent from each other.
  • That is a nice observation. I added an answer with a sample implementation of this idea, and it seems to work just fine.
  • Thanks for the answer. I actually already use a break condition inside my code. However, since there are 2000 L lists and C has 6000+ elements, using for loops still means 2000*6000 iterations, so the running time is still pretty long.