Limiting Memory Use in a *Large* Django QuerySet
I have a task which needs to be run on 'most' objects in my database once every so often (once a day, once a week, whatever). Basically this means that I have some query that looks like this running in its own thread.
for model_instance in SomeModel.objects.all():
    do_something(model_instance)
(Note that it's actually a filter(), not all(), but nonetheless I still end up selecting a very large set of objects.)
The problem I'm running into is that after running for a while, the thread is killed by my hosting provider because I'm using too much memory. I'm assuming all this memory use is happening because, even though the QuerySet object returned by my query initially has a very small memory footprint, it ends up growing as the QuerySet caches each model_instance as I iterate through them.
My question is, "what is the best way to iterate through almost every SomeModel in my database in a memory efficient way?" or perhaps my question is "how do I 'un-cache' model instances from a django queryset?"
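(To illustrate the caching I mean: plain iteration fills the QuerySet's internal result cache, whereas .iterator() streams rows without retaining them. A minimal sketch, reusing SomeModel and do_something from above; note that .iterator() only skips the QuerySet's own cache, and the database driver may still buffer the full result set client-side, which several answers below address.)

for model_instance in SomeModel.objects.all():
    do_something(model_instance)  # each instance is kept in the QuerySet's result cache

for model_instance in SomeModel.objects.all().iterator():
    do_something(model_instance)  # instances are not retained by the QuerySet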
EDIT: I'm actually using the results of the queryset to build a series of new objects. As such, I don't end up updating the queried-for objects at all.
Solution 1:[1]
So what I actually ended up doing is building something that you can 'wrap' a QuerySet in. It works by making a deepcopy of the QuerySet and using the slice syntax--e.g., some_queryset[15:45]--to pull out a chunk of objects; once that slice has been completely iterated through, it makes another deepcopy of the original QuerySet for the next slice. This means that only the set of objects returned by 'this' particular slice is stored in memory.
import copy
import logging

logger = logging.getLogger(__name__)

class MemorySavingQuerysetIterator(object):

    def __init__(self, queryset, max_obj_num=1000):
        self._base_queryset = queryset
        self._generator = self._setup()
        self.max_obj_num = max_obj_num

    def _setup(self):
        for i in xrange(0, self._base_queryset.count(), self.max_obj_num):
            # By making a copy of the queryset and using that to actually access
            # the objects we ensure that there are only `max_obj_num` objects in
            # memory at any given time
            smaller_queryset = copy.deepcopy(self._base_queryset)[i:i + self.max_obj_num]
            logger.debug('Grabbing next %s objects from DB' % self.max_obj_num)
            for obj in smaller_queryset.iterator():
                yield obj

    def __iter__(self):
        return self

    def next(self):
        return self._generator.next()
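(The class above is written Python 2 style. On Python 3, xrange becomes range and the iterator hook is spelled __next__; a sketch of the only change to the class itself:)

    # Python 3: use range(...) in _setup() and define __next__ instead of next
    def __next__(self):
        return next(self._generator)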
So instead of...
for obj in SomeObject.objects.filter(foo='bar'):  # <-- Something that returns *a lot* of Objects
    do_something(obj)
You would do...
for obj in MemorySavingQuerysetIterator(SomeObject.objects.filter(foo='bar')):
    do_something(obj)
Please note that the intention of this is to save memory in your Python interpreter. It essentially does this by making more database queries. Usually people are trying to do the exact opposite of that--i.e., minimize database queries as much as possible without regard to memory usage. Hopefully somebody will find this useful though.
Solution 2:[2]
What about using django core's Paginator and Page objects documented here:
https://docs.djangoproject.com/en/dev/topics/pagination/
Something like this:
from django.core.paginator import Paginator
from djangoapp.models import SomeModel

paginator = Paginator(SomeModel.objects.all(), 1000)  # chunks of 1000

for page_idx in paginator.page_range:  # page_range includes the last page
    for row in paginator.page(page_idx).object_list:
        # here you can do what you want with the row
        pass
    print "done processing page %s" % page_idx
Solution 3:[3]
You can't use Model.objects.all().iterator() because it will fetch all the elements on your table at once. Neither can you use Model.objects.all()[offset:offset+pagesize], because it will cache the results. Either will exceed your memory limit.
I've tried to mix both solutions, and it worked:
offset = 0
pagesize = 1000
count = Model.objects.all().count()

while offset < count:
    for m in Model.objects.all()[offset:offset + pagesize].iterator():
        do_something(m)
    offset += pagesize
Change pagesize to fit your requirements, and optionally change the [offset : offset + pagesize] to the [offset * pagesize : (offset + 1) * pagesize] idiom if it fits you better. Also, of course, replace Model with your actual model name.
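For example, the page-index variant would look roughly like this (a sketch, with the same hypothetical do_something as above):

page = 0
pagesize = 1000
count = Model.objects.all().count()

while page * pagesize < count:
    # each pass fetches one page of `pagesize` rows
    for m in Model.objects.all()[page * pagesize:(page + 1) * pagesize].iterator():
        do_something(m)
    page += 1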
Solution 4:[4]
Many solutions implement SQL OFFSET and LIMIT via slicing the queryset. As stefano notes, with larger datasets this becomes very inefficient. The proper way of handling this is to use server-side cursors to keep track of the OFFSET.
Native server-side cursor support is in the works for django. Until it's ready, here is a simple implementation if you are using postgres with the psycopg2 backend:
from django.db import connection, transaction

def server_cursor_query(Table):
    table_name = Table._meta.db_table

    # There must be an existing connection before creating a server-side cursor
    if connection.connection is None:
        dummy_cursor = connection.cursor()  # not a server-side cursor

    # Optionally keep track of the columns so that we can return model instances.
    # However, if your table has foreign keys, you may need to rename them appropriately.
    columns = [x.name for x in Table._meta.local_fields]

    cursor = connection.connection.cursor(name='gigantic_cursor')  # a server-side cursor

    with transaction.atomic():
        cursor.execute('SELECT {} FROM {}'.format(', '.join(columns), table_name))

        while True:
            rows = cursor.fetchmany(1000)
            if not rows:
                break
            for row in rows:
                fields = dict(zip(columns, row))
                yield Table(**fields)
See this blog post for a great explanation of memory issues from large queries in django.
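As an aside, newer Django versions make much of this unnecessary: since Django 1.11, QuerySet.iterator() uses a server-side cursor on PostgreSQL, and Django 2.0 added a chunk_size argument controlling how many rows are fetched at a time. If that's available to you, the whole solution can collapse to something like:

for obj in SomeModel.objects.all().iterator(chunk_size=1000):
    do_something(obj)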
Solution 5:[5]
There is a django snippet for this:
http://djangosnippets.org/snippets/1949/
It iterates over a queryset by yielding rows of smaller "chunks" of the original queryset. It ends up using significantly less memory while allowing you to tune for speed. I use it in one of my projects.
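In case the link dies: the snippet's approach is essentially "order by primary key, pull a chunk at a time with pk__gt, and garbage-collect between chunks". A from-memory sketch of that general shape (not the snippet verbatim):

import gc

def queryset_iterator(queryset, chunksize=1000):
    # Walk the queryset in primary-key order, `chunksize` rows at a time,
    # collecting garbage between chunks so instances don't accumulate.
    pk = 0
    last_pk = queryset.order_by('-pk')[0].pk
    queryset = queryset.order_by('pk')
    while pk < last_pk:
        for row in queryset.filter(pk__gt=pk)[:chunksize]:
            pk = row.pk
            yield row
        gc.collect()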
Solution 6:[6]
I'm continuing my research, and it kind of looks like I want to do the equivalent of an SQL OFFSET and LIMIT, which, according to the Django docs on Limiting QuerySets, means I want to use the slice syntax, e.g., SomeModel.objects.all()[15:25].
So now I'm thinking maybe something like this is what I'm looking for:
# Figure out the number of objects I can safely hold in memory
# I'll just say 100 for right now
number_of_objects = 100

count = SomeModel.objects.all().count()
for i in xrange(0, count, number_of_objects):
    smaller_queryset = SomeModel.objects.all()[i:i + number_of_objects]
    for model_instance in smaller_queryset:
        do_something(model_instance)
By my reckoning this would make it so that smaller_queryset would never grow too large.
Solution 7:[7]
The following approach doesn't use an expensive database offset query and avoids calculating the page number, making it more efficient. Limitations specified in the docstring.
def queryset_pk_iterator(queryset, batch_size=1000):
    """
    Iterator that splits the queryset into batches to reduce memory consumption.
    Useful in cases where the built-in .iterator() method of the queryset skips the "prefetch_related" optimization.
    :param queryset: Queryset to iterate over. The supplied queryset must not specify order and limit/offset.
        Queryset objects must have a monotonically increasing, orderable primary key.
    :param batch_size: Size of the batches into which to split the queryset.
    :return: iterator object
    """
    pk = None
    while True:
        batch_queryset = queryset.order_by('pk')
        if pk is not None:
            batch_queryset = batch_queryset.filter(pk__gt=pk)
        batch_queryset = batch_queryset[:batch_size]
        obj = None
        for obj in batch_queryset:
            yield obj
        if obj is None:
            return
        pk = obj.pk
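Usage is the same as the other solutions, e.g. with the hypothetical do_something from the question:

for obj in queryset_pk_iterator(SomeModel.objects.filter(foo='bar')):
    do_something(obj)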
Solution 8:[8]
The following is a queryset iterator that splits the queryset into chunks and is not much slower than the basic iterator (it makes a linear number of database queries instead of one, but only issues one query per 1,000 rows). This function pages by primary key, which is necessary for an efficient implementation, since OFFSET is a linear-time operation in most SQL databases. Memory use stays constant with respect to the queryset size, proportional only to the page size.
def queryset_iterator(queryset, page_size=1000):
    if not queryset.exists():  # `if not queryset` would fetch and cache the entire queryset
        return
    max_pk = queryset.order_by("-pk")[0].pk
    # Scale the page size up by the average density of primary keys in the queryset
    adjusted_page_size = int(page_size * max_pk / queryset.count())
    pages = int(max_pk / adjusted_page_size) + 1
    for page_num in range(pages):
        lower = page_num * adjusted_page_size
        page = queryset.filter(pk__gte=lower, pk__lt=lower + adjusted_page_size)
        for obj in page:
            yield obj
Use looks like:
for obj in queryset_iterator(Model.objects.all()):
    # do stuff
    pass
This code makes two assumptions:
- Your primary keys are integers (this will not work for UUID primary keys).
- The primary keys of the queryset are at least somewhat uniformly distributed. If this is not true, adjusted_page_size can end up too large and you may get one or several massive pages as part of your iteration.
To give a sense of the overhead, I tested this on a Postgres table with 40,000 entries. The queryset_iterator adds about 80% to the iteration time vs raw iteration (2.2 seconds vs 1.2 seconds). That overhead does not vary substantially for page sizes between 200 and 10,000, though it starts going up below 200.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Chris W. |
| Solution 2 | Ethan Herdrick |
| Solution 3 | Neuron - Freedom for Ukraine |
| Solution 4 | |
| Solution 5 | Nick |
| Solution 6 | Chris W. |
| Solution 7 | dtatarkin |
| Solution 8 | |
