Why is mongorestore painfully slow?

I took a dump of a database with only one collection and two indexes. The collection had roughly 6.5 million documents. When I tried to restore it, much to my surprise, the process was painfully slow. Some stats are as follows:

Wed Aug 14 12:34:08.346         Progress: 333818/1378309050 0%  (bytes)
Wed Aug 14 12:34:11.077         Progress: 1530246/1378309050    0%  (bytes)
Wed Aug 14 12:34:26.177         Progress: 2714544/1378309050    0%  (bytes)
Wed Aug 14 12:34:30.145         Progress: 3355627/1378309050    0%  (bytes)
Wed Aug 14 12:34:34.504         Progress: 3895668/1378309050    0%  (bytes)
Wed Aug 14 12:34:53.246         Progress: 4334159/1378309050    0%  (bytes)
Wed Aug 14 12:34:56.318         Progress: 4963878/1378309050    0%  (bytes)
Wed Aug 14 12:34:59.545         Progress: 5617794/1378309050    0%  (bytes)
Wed Aug 14 12:35:08.042         Progress: 6923804/1378309050    0%  (bytes)
Wed Aug 14 12:35:16.424         Progress: 7342576/1378309050    0%  (bytes)
Wed Aug 14 12:35:23.168         Progress: 7987560/1378309050    0%  (bytes)
Wed Aug 14 12:35:29.703         Progress: 9295140/1378309050    0%  (bytes)
Wed Aug 14 12:35:38.582         Progress: 9943758/1378309050    0%  (bytes)
Wed Aug 14 12:35:43.574         Progress: 11128693/1378309050   0%  (bytes)
Wed Aug 14 12:35:46.008         Progress: 11982044/1378309050   0%  (bytes)
Wed Aug 14 12:35:50.134         Progress: 12421241/1378309050   0%  (bytes)
Wed Aug 14 12:35:54.548         Progress: 13166696/1378309050   0%  (bytes)
Wed Aug 14 12:35:58.152         Progress: 13837935/1378309050   1%  (bytes)

As can be inferred from the data above, the total dump (in BSON) is roughly 1.3 GB, and mongorestore is taking a disappointing 110 seconds to restore just 1% of it, which is about 13 MB.

If anyone has an explanation for this, please let me know. I hope I am inadvertently doing something wrong, because these figures are far too slow by the standards of computing in this century.

EDIT


I ran the command again with the following two options, hoping they would speed up the process:

--noobjcheck --noIndexRestore
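For context, an invocation with these options would look roughly like the following; the database name is taken from the namespace in the log below, and the dump path is a placeholder:

mongorestore --noobjcheck --noIndexRestore -d temp_raw_tweet_db dump/temp_raw_tweet_db/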

But much to my surprise, the process is now even slower! Here are some of the stats.

Wed Aug 14 13:13:53.750     going into namespace [temp_raw_tweet_db.tweets]
Wed Aug 14 13:14:00.258         Progress: 871186/1378309050 0%  (bytes)
Wed Aug 14 13:14:04.424         Progress: 2070390/1378309050    0%  (bytes)
Wed Aug 14 13:14:07.482         Progress: 2921304/1378309050    0%  (bytes)
Wed Aug 14 13:14:11.895         Progress: 3647526/1378309050    0%  (bytes)
Wed Aug 14 13:14:57.028         Progress: 4984815/1378309050    0%  (bytes)
Wed Aug 14 13:15:01.015         Progress: 6202286/1378309050    0%  (bytes)
Wed Aug 14 13:15:05.051         Progress: 6797800/1378309050    0%  (bytes)
Wed Aug 14 13:15:08.402         Progress: 8133842/1378309050    0%  (bytes)
Wed Aug 14 13:15:12.712         Progress: 8872607/1378309050    0%  (bytes)
Wed Aug 14 13:15:15.259         Progress: 9964997/1378309050    0%  (bytes)
Wed Aug 14 13:15:19.266         Progress: 14684145/1378309050   1%  (bytes)
Wed Aug 14 13:15:22.364         Progress: 16154567/1378309050   1%  (bytes)
Wed Aug 14 13:15:29.627         Progress: 16754495/1378309050   1%  (bytes)
Wed Aug 14 13:15:35.225         Progress: 17726291/1378309050   1%  (bytes)
Wed Aug 14 13:15:39.447         Progress: 18333902/1378309050   1%  (bytes)
Wed Aug 14 13:15:43.717         Progress: 19055308/1378309050   1%  (bytes)
Wed Aug 14 13:15:46.481         Progress: 19305912/1378309050   1%  (bytes)
Wed Aug 14 13:15:49.902         Progress: 20038391/1378309050   1%  (bytes)
Wed Aug 14 13:15:53.868         Progress: 20389108/1378309050   1%  (bytes)
Wed Aug 14 13:15:58.578         Progress: 21127296/1378309050   1%  (bytes)
Wed Aug 14 13:16:03.706         Progress: 21837923/1378309050   1%  (bytes)
Wed Aug 14 13:16:56.512         Progress: 22092536/1378309050   1%  (bytes)
Wed Aug 14 13:16:59.035         Progress: 22583057/1378309050   1%  (bytes)
Wed Aug 14 13:17:02.313         Progress: 22817464/1378309050   1%  (bytes)
Wed Aug 14 13:17:05.044         Progress: 23178521/1378309050   1%  (bytes)
Wed Aug 14 13:17:26.023         Progress: 23434010/1378309050   1%  (bytes)
Wed Aug 14 13:17:39.161         Progress: 23670222/1378309050   1%  (bytes)
Wed Aug 14 13:17:42.846         Progress: 24049639/1378309050   1%  (bytes)
Wed Aug 14 13:17:59.125         Progress: 24284177/1378309050   1%  (bytes)
Wed Aug 14 13:18:02.722         Progress: 24515270/1378309050   1%  (bytes)
Wed Aug 14 13:18:06.827         Progress: 25018013/1378309050   1%  (bytes)
Wed Aug 14 13:18:09.234         Progress: 25253850/1378309050   1%  (bytes)
Wed Aug 14 13:18:14.282         Progress: 25617812/1378309050   1%  (bytes)
Wed Aug 14 13:18:46.296         Progress: 25983107/1378309050   1%  (bytes)
Wed Aug 14 13:18:51.303         Progress: 26604320/1378309050   1%  (bytes)
Wed Aug 14 13:18:55.500         Progress: 26971559/1378309050   1%  (bytes)
Wed Aug 14 13:19:00.656         Progress: 27444735/1378309050   1%  (bytes)
Wed Aug 14 13:19:04.100         Progress: 28064675/1378309050   2%  (bytes)

It is taking roughly 4 minutes to go from 1% to 2%. Surely something is drastically wrong here.



Solution 1:[1]

This is a very old thread, but I ran into a similar problem recently, possibly for a different reason, and came upon this question.

If you happen to be running mongo on AWS, be sure to use the right instance and volume types.

T-type instances have CPU credits, which will run out during a big mongorestore process. Your restore will start fast, then slow down to a crawl... It never really stops, but it will take days to finish.

If you are trying to save some money by using a magnetic EBS volume, that's a bad idea too. SC1 is especially wrong, since it has similar credits for operations per second... mongorestore will burn through all your IOPS credits within minutes, no matter how many you have, after which the speed drops to about 3 operations per second and the restore can take WEEKS to finish.
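If you suspect this is happening, you can check the relevant CloudWatch metrics instead of blaming mongorestore itself. A minimal sketch using the AWS CLI; the volume ID, instance ID, and time range are placeholders:

# Remaining EBS burst credits (gp2/st1/sc1 volumes expose a BurstBalance metric, 0-100%)
aws cloudwatch get-metric-statistics \
    --namespace AWS/EBS --metric-name BurstBalance \
    --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
    --start-time 2023-08-14T00:00:00Z --end-time 2023-08-14T12:00:00Z \
    --period 300 --statistics Average

# Remaining CPU credits on a T-type instance
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 --metric-name CPUCreditBalance \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time 2023-08-14T00:00:00Z --end-time 2023-08-14T12:00:00Z \
    --period 300 --statistics Average

If either balance sits at or near zero while the restore is running, the instance or volume type is the bottleneck.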

I hope that helps someone.

Solution 2:[2]

Unfortunately, these numbers are not unusual. mongorestore on a collection of 300 million documents takes days.

You've basically got two options.

First, just accept the long-running restore and run it overnight:

nohup mongorestore [args] &
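To keep the progress output instead of letting it go to the default nohup.out, you can redirect it to a log file, for example:

nohup mongorestore [args] > restore.log 2>&1 &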

Second, copy the files in your database directory itself (default /data/db) rather than using mongodump/mongorestore. If you can shut down your database for a period of time, this is best. Otherwise, you have to use file system snapshots or similar; see the sketch below and the official MongoDB documentation for more: http://docs.mongodb.org/manual/core/backups/
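A minimal sketch of the shutdown-and-copy approach, assuming the default /data/db path and a mongod managed as a system service (the destination path and host are placeholders):

sudo service mongod stop                  # shut down cleanly so the data files are consistent
rsync -a /data/db/ /backup/db/            # or copy to another host: rsync -a /data/db/ host:/data/db/
sudo service mongod start                 # bring the database back up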

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution sources:
Solution 1: (not listed)
Solution 2: Gert van den Berg