Python Multiprocessing.Pool lazy iteration


I'm wondering about the way that Python's multiprocessing.Pool class works with map, imap, and map_async. My particular problem is that I want to map over an iterator that creates memory-heavy objects, and don't want all these objects to be generated into memory at the same time. I wanted to see if the various map() functions would wring my iterator dry, or intelligently call next() only as child processes slowly advanced, so I hacked up some tests as such:

from multiprocessing import Pool

def g():
  for el in xrange(100):
    print el
    yield el

def f(x):
  return x*x

if __name__ == '__main__':
  pool = Pool(processes=4)              # start 4 worker processes
  go = g()
  g2 = pool.imap(f, go)

And so on with map, imap, and map_async. imap is the most flagrant example, however: simply calling next() a single time on g2 prints out all the elements from my generator g(), whereas if imap were doing this 'lazily' I would expect it to call next() on g() only once, and therefore print out only the first element.

Can someone clear up what is happening, and if there is some way to have the process pool 'lazily' evaluate the iterator as needed?



Let's look at the end of the program first.

The multiprocessing module uses atexit to call multiprocessing.util._exit_function when your program ends.

If you remove that atexit hook, your program ends quickly.

The _exit_function eventually calls Pool._terminate_pool. The main thread changes the state of pool._task_handler._state from RUN to TERMINATE. Meanwhile the pool._task_handler thread is looping in Pool._handle_tasks and bails out when it reaches the condition

            if thread._state:
                debug('task handler found thread._state != RUN')
                break

(See the source in /usr/lib/python2.6/multiprocessing/.)

This is what stops the task handler from fully consuming your generator, g(). If you look in Pool._handle_tasks you'll see

        for i, task in enumerate(taskseq):
            if thread._state:
                debug('task handler found thread._state != RUN')
                break
            try:
                put(task)
            except IOError:
                debug('could not put task on queue')
                break

This is the code which consumes your generator. (taskseq is not exactly your generator, but as taskseq is consumed, so is your generator.)
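To see why the generator gets drained, here is a minimal single-process sketch of what the task-handler thread does. The names `handle_tasks` and `inqueue` are mine, chosen to mirror the stdlib internals described above; this is an illustration, not the actual `Pool` code:

```python
import queue
import threading

consumed = []

def g():
    # same shape as the generator in the question
    for el in range(10):
        consumed.append(el)
        yield el

inqueue = queue.Queue()

def handle_tasks(taskseq):
    # mimics Pool._handle_tasks: drain the iterable as fast as possible,
    # pushing every task onto the workers' input queue
    for task in taskseq:
        inqueue.put(task)

t = threading.Thread(target=handle_tasks, args=(g(),))
t.start()
t.join()

# the entire generator was consumed even though no worker touched a task yet
assert consumed == list(range(10))
```

Because nothing throttles the loop, the generator is exhausted as fast as the queue accepts items, regardless of how quickly results are retrieved.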

In contrast, when you call g2.next(), the main thread calls IMapIterator.next(), and waits when it reaches self._cond.wait(timeout).

That the main thread is waiting instead of calling _exit_function is what allows the task handler thread to run normally, which means fully consuming the generator as it puts tasks in the workers' inqueue in the Pool._handle_tasks function.
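That wait on `self._cond.wait(timeout)` is an ordinary condition-variable pattern. Here is a stripped-down sketch of an `IMapIterator`-like result holder; the class and method bodies are illustrative, not the stdlib's actual implementation:

```python
import threading

class ResultIterator:
    # sketch of the IMapIterator idea: next() blocks on a condition
    # until another thread deposits a result
    def __init__(self):
        self._cond = threading.Condition()
        self._items = []

    def _set(self, item):
        # called from the result-handler thread when a worker finishes
        with self._cond:
            self._items.append(item)
            self._cond.notify()

    def next(self):
        with self._cond:
            while not self._items:
                self._cond.wait()   # the main thread parks here
            return self._items.pop(0)

r = ResultIterator()
threading.Timer(0.05, r._set, args=(42,)).start()
value = r.next()   # blocks briefly until the timer thread calls _set
```

While the main thread sits in that wait, the task-handler thread keeps running and keeps consuming the input iterable.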

The bottom line is that all the Pool map functions consume the entire iterable they are given. If you'd like to consume the generator in chunks, you could do this instead:

import multiprocessing as mp
import itertools
import time

def g():
    for el in xrange(50):
        print el
        yield el

def f(x):
    return x * x

if __name__ == '__main__':
    pool = mp.Pool(processes=4)              # start 4 worker processes
    go = g()
    result = []
    N = 11
    while True:
        g2 = pool.map(f, itertools.islice(go, N))
        if g2:
            result.extend(g2)
            time.sleep(1)
        else:
            break
    print(result)
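A Python 3 version of the same chunked approach, using the thread-based `multiprocessing.dummy.Pool` (which has the same API as `multiprocessing.Pool`) so it runs anywhere; the helper name `chunked_map` and the `chunk` parameter are mine:

```python
import itertools
from multiprocessing.dummy import Pool  # thread-based Pool, same API

def f(x):
    return x * x

def gen():
    for el in range(50):
        yield el

def chunked_map(pool, func, iterable, chunk=11):
    # pull `chunk` items at a time from the iterable; the generator is
    # only advanced by `chunk` elements per pool.map() call
    it = iter(iterable)
    results = []
    while True:
        batch = pool.map(func, itertools.islice(it, chunk))
        if not batch:
            break
        results.extend(batch)
    return results

pool = Pool(4)                 # 4 worker threads
out = chunked_map(pool, f, gen())
pool.close()
pool.join()
```

At most `chunk` heavy objects exist at once, which is the memory bound the question asks for, at the cost of a synchronization barrier between chunks.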

I had this problem too and was disappointed to learn that map consumes all its elements. I coded a function which consumes the iterator lazily using the Queue data type in multiprocessing. This is similar to what @unutbu describes in a comment to his answer but as he points out, suffers from having no callback mechanism for re-loading the Queue. The Queue datatype instead exposes a timeout parameter and I've used 100 milliseconds to good effect.

from multiprocessing import Process, Queue, cpu_count
from Queue import Full as QueueFull
from Queue import Empty as QueueEmpty

def worker(recvq, sendq):
    for func, args in iter(recvq.get, None):
        result = func(*args)
        sendq.put(result)

def pool_imap_unordered(function, iterable, procs=cpu_count()):
    # Create queues for sending/receiving items from iterable.

    sendq = Queue(procs)
    recvq = Queue()

    # Start worker processes.

    for rpt in xrange(procs):
        Process(target=worker, args=(sendq, recvq)).start()

    # Iterate iterable and communicate with worker processes.

    send_len = 0
    recv_len = 0
    itr = iter(iterable)

    try:
        value = itr.next()
        while True:
            try:
                # wrap value as a one-argument tuple for func(*args)
                sendq.put((function, (value,)), True, 0.1)
                send_len += 1
                value = itr.next()
            except QueueFull:
                while True:
                    try:
                        result = recvq.get(False)
                        recv_len += 1
                        yield result
                    except QueueEmpty:
                        break
    except StopIteration:
        pass

    # Collect all remaining results.

    while recv_len < send_len:
        result = recvq.get()
        recv_len += 1
        yield result

    # Terminate worker processes.

    for rpt in xrange(procs):
        sendq.put(None)
This solution has the advantage of not batching requests: one individual worker cannot block others from making progress. YMMV. Note that you may want to use a different object to signal termination for the workers; in the example, I've used None.

Tested on "Python 2.7 (r27:82525, Jul 4 2010, 09:01:59) [MSC v.1500 32 bit (Intel)] on win32"
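The `iter(recvq.get, None)` idiom in the worker above is worth isolating. Here is the same sentinel pattern shown with threads and `queue.Queue` for brevity (the answer uses real processes, but the idiom is identical):

```python
import queue
import threading

def square(x):
    return x * x

def worker(recvq, sendq):
    # iter(get, None) keeps calling recvq.get() until it returns the
    # None sentinel, then the loop (and the worker) ends
    for func, args in iter(recvq.get, None):
        sendq.put(func(*args))

recvq, sendq = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(recvq, sendq))
t.start()

for x in range(5):
    recvq.put((square, (x,)))
recvq.put(None)            # tell the worker to stop
t.join()

results = sorted(sendq.get() for _ in range(5))
```

One None must be sent per worker, which is why the answer's cleanup loop puts the sentinel `procs` times.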


What you want is implemented in the NuMap package, from the website:

NuMap is a parallel (thread- or process-based, local or remote), buffered, multi-task, itertools.imap or multiprocessing.Pool.imap function replacement. Like imap it evaluates a function on elements of a sequence or iterable, and it does so lazily. Laziness can be adjusted via the "stride" and "buffer" arguments.


In this example (see the code, please) there are 2 workers.

The Pool works as expected: when a worker is free, it takes the next iteration.

This code is the same as the code in the question, except for one thing: each item is 64 KiB in size.

64 KiB is the default socket buffer size.

import itertools
from multiprocessing import Pool
from time import sleep

def f( x ):
    print( "f()" )
    sleep( 3 )
    return x

def get_reader():
    for x in range( 10 ):
        print( "read: ", x )
        value = " " * 1024 * 64 # 64k
        yield value

if __name__ == '__main__':

    p = Pool( processes=2 )

    data = p.imap( f, get_reader() )

    # consume the results so the example actually runs to completion
    for x in data:
        print( x )

  • After removing the time.sleep call and adding a print os.getpid(), x in f the behavior looks even more weird, sometimes only 2 or 3 different PID's are printed and always do a different amount of iterations... BTW what Python version are you using?
  • Python 2.6.6 (r266:84292, Dec 26 2010, 22:31:48) Standard debian install.
  • Great answer, I ended up re-implementing a thread pool that consumes element by element in the meantime, but your islice solution would have been much less work for me, oh well :-). I tried looking around some and noticed that indeed the map/imap/map_async functions seemed to eat up the iterator right away. It's not clear to me if that's really necessary though, especially in the case of standard
  • @Gabe: To consume the iterator just-in-time, I think some extra signaling mechanism would have to be coded in Pool to tell the task handler when to put more tasks in the inqueue. Maybe it's possible, but doesn't currently exist in Pool, and might slow the process down a bit too.
  • Indeed, my solution was to make a task queue of size N*size_of_pool and play around with N until it looked like the queue was keeping a good buffer. Of course this is task-dependent so I can understand that the author of the Pool code didn't want to deal with that. Thanks for your response!
  • What if the generator is such that you don't know the number of elements (100 in this case)?
  • @Vince: You could change the for-loop to a while-loop, and break when the result of pool.map is empty. I've edited the post to show what I mean.
  • I've checked on Python 3.3 and neither imap nor imap_unordered consumes all the arguments before kicking off the mapped function, although map does.
  • +1 This is nearly what I need, but unfortunately I need ordered results.
  • Instead of tuning get/put timeouts for in/out Queues, I normally 1) set a fixed size for both Queues, and 2) let get/put block if the Queue is empty/full. This way there is no need to tune timeouts. There is only the need to check the number of items going into the in-queue and out of the out-queue. The correct order is then: 1) start workers; 2) start out-Queue collector; 3) iterate over input and populate the in-Queue.
  • @neo, it is possible to get ordered results, too. The way to achieve this is to have 4 limited-size Queues [In (data for workers), Out (final, properly ordered results), Serial (keeps track of items in processing), Sort (intermediate Queue)] and 2 types of workers - a single sorter() and N actual workers. The idea is: 1) have a serial number generator; 2) submit each dataset as In.put( (serial, data) ), together with Serial.put(serial); 3) workers do Sort.put((serial, result)); 4) sorter() gets items from Sort, sorts them by serial, and puts into Out.
  • @neo, here's an untested example of sorter():… Forgot to mention that all Queues must have the same size limit, and all queue items are assumed to take approximately the same amount of processing time for this scheme to work (otherwise the sorter()'s internal buffer will start accumulating too many results).
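The serial-number scheme in the last two comments can be sketched without any processes: tag each task with a serial, and have the sorter buffer out-of-order results in a heap until the next expected serial arrives. The function name and shape below are mine, a minimal stand-in for the `sorter()` worker described above:

```python
import heapq

def reorder(tagged_results):
    # consume (serial, result) pairs in arbitrary arrival order and
    # yield results strictly in serial order, buffering in a small heap
    heap = []
    next_serial = 0
    for serial, result in tagged_results:
        heapq.heappush(heap, (serial, result))
        while heap and heap[0][0] == next_serial:
            yield heapq.heappop(heap)[1]
            next_serial += 1

# results arriving out of order, as from unordered workers
ordered = list(reorder([(2, 'c'), (0, 'a'), (3, 'd'), (1, 'b')]))
```

As the comment notes, the heap stays small only if items take roughly similar processing time; a single slow task makes the buffer grow until its serial arrives.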