How to make a request without blocking (using asyncio)?

I would like to achieve the following using asyncio:

# Each iteration of this loop MUST last only 1 second
while True:
    # Make an async request

    sleep(1)

However, the only examples I've seen use some variation of:

async def my_func():
    loop = asyncio.get_event_loop()
    await loop.run_in_executor(None, requests.get, 'http://www.google.com')

loop = asyncio.get_event_loop()
loop.run_until_complete(my_func())

But run_until_complete is blocking! Using run_until_complete in each iteration of my while loop would cause the loop to block.

I've spent the last couple of hours trying to figure out how to correctly run a non-blocking task (defined with async def) without success. I must be missing something obvious, because something this common should surely be straightforward. How can I achieve what I've described?



Solution 1:[1]

run_until_complete runs the main event loop. It isn't "blocking" so much as it runs the event loop until the coroutine you passed as a parameter returns. It has to hang, because otherwise the program would either exit or move on to the next instructions before the coroutine finished.
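For instance, here is a minimal sketch of that behavior (compute is a made-up coroutine for illustration):

import asyncio

async def compute():
    await asyncio.sleep(1)  # the event loop keeps running during this await
    return 42

loop = asyncio.get_event_loop()
result = loop.run_until_complete(compute())  # hangs here until compute() returns
print(result)  # 42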

It's pretty hard to tell exactly what you are trying to achieve, but this piece of code does what you describe:

import asyncio
import requests

async def my_func():
    loop = asyncio.get_event_loop()
    while True:
        # Run the blocking requests.get in a thread-pool executor so the
        # event loop stays free
        res = await loop.run_in_executor(None, requests.get, 'http://www.google.com')
        print(res)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
loop.run_until_complete(my_func())

It performs a GET request on the Google homepage every second, spawning a new thread for each request. You can convince yourself that it's actually non-blocking by running multiple requests virtually in parallel:

import asyncio
import requests

async def entrypoint():
    # gather accepts plain coroutines on every Python version
    # (asyncio.wait requires them to be wrapped in tasks since 3.11)
    await asyncio.gather(
        get('https://www.google.com'),
        get('https://www.stackoverflow.com'),
    )

async def get(url):
    loop = asyncio.get_event_loop()
    while True:
        res = await loop.run_in_executor(None, requests.get, url)
        print(url, res)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
loop.run_until_complete(entrypoint())

Another thing to notice is that each request runs in a separate thread. That works, but it's something of a hack; you should instead use a real asynchronous HTTP client such as aiohttp.
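For illustration, here is a rough aiohttp equivalent of the two-URL example above (a sketch, assuming aiohttp is installed; the requests run on the event loop itself, with no executor threads):

import asyncio
import aiohttp

async def get(session, url):
    # session.get is a true coroutine: no thread pool is involved
    while True:
        async with session.get(url) as resp:
            body = await resp.text()
            print(url, resp.status, len(body))
        await asyncio.sleep(1)

async def main():
    # One ClientSession is shared across requests, as aiohttp recommends
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(
            get(session, 'https://www.google.com'),
            get(session, 'https://www.stackoverflow.com'),
        )

asyncio.run(main())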

Solution 2:[2]

This is for Python 3.10.

asyncio runs on a single thread; await yields the CPU to other coroutines until the awaited operation completes.

    import asyncio

    async def my_func(t):
        print("Start my_func")
        await asyncio.sleep(t)  # The await yields the CPU while we wait
        print("Exit my_func")

    async def main():
        # Schedule my_func on the event loop; the returned task could be
        # saved to check for completion later
        asyncio.ensure_future(my_func(10))
        print("Start main")
        await asyncio.sleep(1)  # The await yields the CPU, giving my_func a chance to start
        print("running other stuff")
        await asyncio.sleep(15)
        print("Exit main")

    if __name__ == "__main__":
        asyncio.run(main())  # Starts the event loop
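
If the interleaving isn't obvious, this prints (timing approximate; my_func starts only at main's first await, and finishes while main is still sleeping):

    Start main
    Start my_func
    running other stuff
    Exit my_func
    Exit main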

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1
[2] Solution 2: Eric Aya