Python async requests + writing files confusion

I have a simple Python async program that retrieves a gzipped file from a given URL, using aiohttp for the async requests. Following the aiohttp docs (https://docs.aiohttp.org/en/stable/client_quickstart.html), I based the download loop in my test method on their example under 'Streaming Response Content':

import asyncio
import aiohttp


async def main(url):
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(test(session, url))


async def test(session, url):
    async with session.get(url=url) as r:
        # plain (synchronous) file handle for the downloaded chunks
        with open('test.csv.gz', 'wb') as f:
            async for chunk in r.content.iter_chunked(1024):
                f.write(chunk)

However, I am not sure whether the body of test() is actually asynchronous. Many articles I've read say the 'await' keyword is what makes a coroutine yield control (e.g. something like r = await session.get(url=url)), but I'm wondering whether the 'async with' and 'async for' constructs achieve the same thing?
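To make the question concrete, my current mental model is that 'async for' is roughly sugar for awaiting __anext__() on each iteration. Here is a rough desugaring I wrote out myself (not real aiohttp code, just how I understand the construct):

# What I think "async for chunk in r.content.iter_chunked(1024):" expands to:
it = r.content.iter_chunked(1024).__aiter__()
while True:
    try:
        chunk = await it.__anext__()  # a suspension point on every chunk
    except StopAsyncIteration:
        break
    f.write(chunk)  # ...but this write itself never awaits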

What I am hoping to achieve is async behaviour both when doing session.get() and when writing the data to a local file, so that if I pass in many URLs the event loop can a) switch between tasks while fetching the URLs and b) switch between tasks while writing the data to disk.
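For context, this is roughly how I intend to fan out over many URLs (my own sketch; the URLs are placeholders, and each task would presumably need its own output filename to avoid clobbering):

import asyncio
import aiohttp

async def main(urls):
    async with aiohttp.ClientSession() as session:
        # one task per URL, run concurrently on the event loop
        await asyncio.gather(*(test(session, url) for url in urls))

urls = ['https://example.com/a.csv.gz', 'https://example.com/b.csv.gz']
asyncio.run(main(urls))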

For b), would I need to use something like the following?

async with aiofiles.open('test.csv.gz', 'wb') as f:
    async for chunk in r.content.iter_chunked(1024):
        await f.write(chunk)
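i.e. the whole test() coroutine would become something like this (my sketch, assuming the third-party aiofiles package is installed):

import aiofiles

async def test(session, url):
    async with session.get(url=url) as r:
        async with aiofiles.open('test.csv.gz', 'wb') as f:
            async for chunk in r.content.iter_chunked(1024):
                await f.write(chunk)  # now the write is awaited too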

This leads me to a slightly off-topic question: what is the difference between async with session.get(url=url) as r: and r = await session.get(url=url)?
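From my reading of the docs, the two forms seem to differ mainly in who releases the underlying connection. Here is my comparison (my understanding, not verified):

# Form 1: the context manager releases the connection automatically
async with session.get(url) as r:
    data = await r.read()

# Form 2: plain await performs the same request, but I believe
# I then have to release the response myself
r = await session.get(url)
try:
    data = await r.read()
finally:
    r.release()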

Please let me know if my understanding is flawed or if there is something fundamental I am missing regarding the async functionality!


