Efficient way to read all the data from a large Couchbase bucket

We have about 80M products in a bucket in Couchbase, and every day we need to read all of them and run some calculations. I am using SELECT queries with a LIMIT of 512 per query, looping over the offsets to read each collection, but I sometimes get a timeout error. Is there a better, more efficient way to read all the data from Couchbase?

    import math

    # Page through the collection with LIMIT/OFFSET queries of buffer_size rows each.
    for x_range in range(math.ceil(total_couchbase / buffer_size)):
        result = couchbase_cluster.query(
            get_select_query(cluster_name, scope, collection_name,
                             buffer_size, x_range * buffer_size))
        for current_document in result:
            # do some calculations


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow