Python Requests: Don't wait for request to finish

In Bash, it is possible to execute a command in the background by appending &. How can I do it in Python?

import requests

while True:
    data = raw_input('Enter something: ')  # input() in Python 3
    requests.post(url, data=data) # Don't wait for it to finish.
    print('Sending POST request...') # This should appear immediately.

I use multiprocessing.dummy.Pool. I create a singleton thread pool at the module level, and then use pool.apply_async(requests.get, [params]) to launch the task.

This call returns a future, which I can add to a list with other futures indefinitely, until I want to collect all or some of the results.

multiprocessing.dummy.Pool is, against all logic and reason, a THREAD pool and not a process pool.

Example (works in both Python 2 and 3, as long as requests is installed):

from multiprocessing.dummy import Pool

import requests

pool = Pool(10) # Creates a pool with ten threads; more threads = more concurrency.
                # "pool" is a module attribute; you can be sure there will only
                # be one of them in your application
                # as modules are cached after initialization.

if __name__ == '__main__':
    futures = []
    for x in range(10):
        futures.append(pool.apply_async(requests.get, ['http://example.com/']))
    # futures is now a list of 10 futures.
    for future in futures:
        print(future.get()) # For each future, wait until the request is
                            # finished and then print the response object.

The requests will be executed concurrently, so running all ten of these requests should take no longer than the longest one. This strategy will only use one CPU core, but that shouldn't be an issue because almost all of the time will be spent waiting for I/O.
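To get the asker's fire-and-forget behavior (one request at a time, without blocking the loop), you can reuse the same module-level pool and simply not collect the futures. A minimal sketch, with a hypothetical `send` placeholder standing in for `requests.post` so it runs without a network:

```python
from multiprocessing.dummy import Pool

pool = Pool(10)  # module-level singleton, as above

def send(data):
    # Placeholder for requests.post(url, data=data); hypothetical.
    return f'posted {data!r}'

for data in ['a', 'b', 'c']:        # stand-in for the raw_input loop
    pool.apply_async(send, [data])  # returns immediately; work runs in the background
    print('Sending POST request...')

pool.close()
pool.join()  # only needed if the program would otherwise exit right away
```

Dropping the future means errors are silently swallowed; keep it (or pass error_callback) if you care about failures.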

Here's a hacky way to do it:

import requests

try:
    requests.get("http://127.0.0.1:8000/test/", timeout=0.0000000001)
except requests.exceptions.Timeout:  # covers both ConnectTimeout and ReadTimeout
    pass

An elegant solution from Andrew Gorcester. In addition, without using futures, it is possible to use the callback and error_callback parameters of apply_async (see the docs) to process the response asynchronously:

from multiprocessing.dummy import Pool

import requests
from requests import Response

pool = Pool(10)

def on_success(r: Response):
    if r.status_code == 200:
        print(f'Post succeeded: {r}')
    else:
        print(f'Post failed: {r}')

def on_error(ex: Exception):
    print(f'Post request failed: {ex}')

pool.apply_async(requests.post, args=['http://server.host'],
                 kwds={'json': {'key': 'value'}},
                 callback=on_success, error_callback=on_error)
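Note that apply_async returns immediately and the callback fires on one of the pool's helper threads, so if the script can exit before the request finishes, close and join the pool first. A stdlib-only sketch with a hypothetical `fake_post` standing in for `requests.post`:

```python
from multiprocessing.dummy import Pool

results = []

def fake_post(url, json=None):
    # Placeholder for requests.post; hypothetical.
    return f'POST {url} {json}'

def on_success(r):
    results.append(r)

def on_error(ex):
    results.append(ex)

pool = Pool(4)
pool.apply_async(fake_post, args=['http://server.host'],
                 kwds={'json': {'key': 'value'}},
                 callback=on_success, error_callback=on_error)
pool.close()
pool.join()  # wait so the callback has run before we read results
print(results)
```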

According to the docs, you should move to another library:

Blocking Or Non-Blocking?

With the default Transport Adapter in place, Requests does not provide any kind of non-blocking IO. The Response.content property will block until the entire response has been downloaded. If you require more granularity, the streaming features of the library (see Streaming Requests) allow you to retrieve smaller quantities of the response at a time. However, these calls will still block.

If you are concerned about the use of blocking IO, there are lots of projects out there that combine Requests with one of Python’s asynchronicity frameworks.

Two excellent examples are grequests and requests-futures.
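If you would rather not add a dependency, the standard library's concurrent.futures offers the same future-based pattern that requests-futures wraps around requests. A minimal sketch with a hypothetical `fetch` placeholder in place of `requests.get`:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Placeholder for requests.get(url); hypothetical.
    return f'response for {url}'

with ThreadPoolExecutor(max_workers=10) as executor:
    futures = [executor.submit(fetch, f'http://example.com/{i}') for i in range(3)]
    for future in futures:
        print(future.result())  # blocks only until that particular request is done
```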

If you can put the code to be executed in a separate Python program, here is a possible solution based on the subprocess module.

Otherwise you may find this question and related answer useful: the trick is to use the threading library to start a separate thread that executes the separated task.
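The threading approach can be as short as starting one thread per request. A sketch with a hypothetical `send` placeholder instead of a real `requests.post` call:

```python
import threading

sent = []

def send(url, data):
    # Placeholder for requests.post(url, data=data); hypothetical.
    sent.append((url, data))

t = threading.Thread(target=send, args=('http://server.host', 'payload'), daemon=True)
t.start()  # returns immediately; the request runs in the background
t.join()   # optional: wait only if you need the request to finish
```

With daemon=True the thread will not keep the interpreter alive, so an in-flight request may be killed at exit; drop daemon=True (or join the thread) if that matters.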

A caveat with both approaches is the number of items (that is to say, the number of threads) you have to manage. If there are too many items in the parent, you may consider halting every batch of items until at least some threads have finished, but I think this kind of management is non-trivial.
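One simple way to handle that caveat is to let a fixed-size pool do the batching for you: the pool caps the number of live threads, and imap_unordered yields results as slots free up. A stdlib sketch with a hypothetical per-item worker:

```python
from multiprocessing.dummy import Pool

def send(item):
    # Placeholder for the per-item request; hypothetical.
    return item * 2

items = range(25)
with Pool(5) as pool:  # never more than 5 worker threads at once
    results = sorted(pool.imap_unordered(send, items))
```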

For a more sophisticated solution you can take an actor-based approach. I have not used this library myself, but I think it could help in that case.

Comments
  • Unlike CPU-bound concurrency problems in Python, this one could be solved with a separate thread, or by using multiprocessing.dummy for a thread pool.
  • Your solution looks interesting, but also confusing. What's a future? What's the module level? Could you provide a working example?
  • @octosquidopus added example to answer
  • Your example works well, but that is not exactly what I am trying to do. Instead of sending concurrent requests, I would like to send them one at a time, but without blocking the rest of the code. My example should now be less ambiguous.
  • I think the formatting should be pool.apply_async(requests.post, [url], {'data': data}). The function signature is essentially (function_to_run, list_of_positional_args, dict_of_kwargs).
  • r isn't a response object, it's a future for a response object. You get the real response with r.get() -- that produces a response object that's the same as any other. If you only want the status code you could do r.get().status_code (note that if the request resulted in an exception, the exception will be raised when you call get()). You can also do response = r.get() and proceed as normal. If you r.get() before the actual asynchronous request is complete, then you will automatically wait until the request is complete before proceeding.
  • You can often lose the response this way. The question was about requests.post, and its body is also more fragile with a very short timeout than a simple GET.
  • Works well when we do not need any response from the API.
  • When trying this, the server doesn't receive the request. Any idea?
  • try increasing the timeout to 1.0
  • I tried requests-futures, but it fails at csrf = s.cookies['csrftoken'].