Why use `make -j` with 1.5 times the number of CPU cores?

I have seen some instructions saying that I should run `make -j` with 1.5 times my number of CPU cores.

So my question is: does this number give the best build performance, or is it just a made-up number? If it is made up, what number should I pass to `make -j` to get the best speed?

Thanks!



Solution 1:[1]

I would say that it's a made-up number, but it makes sense.

A big part of compiling is just waiting on I/O. If you run `make -j $(nproc)`, some of the cores will sit idle waiting for disk access (which is normally a sequential process).

So it makes sense to keep those waiting cores busy compiling code by oversubscribing the number of parallel jobs.
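A minimal sketch of the 1.5x rule, assuming a Linux-like system where GNU coreutils' `nproc` is available (on macOS, `sysctl -n hw.ncpu` gives the same number):

```shell
# Compute 1.5x the core count with integer shell arithmetic.
cores=$(nproc)
jobs=$(( cores * 3 / 2 ))
echo "make -j $jobs"   # e.g. on an 8-core machine this prints: make -j 12
```

Dropping the `echo` runs the build itself: `make -j "$jobs"`.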

This may or may not work, depending on your number of cores, disk speed, disk cache, memory, files cached in memory, code size, precompiled headers, the complexity of the language (C++ is much more complex to compile than Fortran or C), compilation dependencies, and so on.

In my experience a factor of 1.5 doesn't work well if you have 32 or more cores; it puts too much pressure on disk I/O. For 8 cores it works well.
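Since the sweet spot depends on the machine, the simplest approach is to measure. Here is a hedged sketch that times a throwaway toy build (independent targets that each sleep briefly, standing in for real compile jobs) at a few `-j` values; it assumes GNU make (for `.RECIPEPREFIX`) and GNU `date` (for `%N`):

```shell
#!/bin/sh
# Time a toy parallel build at several -j values to see the scaling.
dir=$(mktemp -d)
cat > "$dir/Makefile" <<'EOF'
.RECIPEPREFIX := >
targets := $(addsuffix .out,$(shell seq 1 8))
all: $(targets)
%.out:
> @sleep 0.2; touch $@
EOF

cores=$(nproc)
for j in 1 "$cores" $(( cores * 3 / 2 )); do
    rm -f "$dir"/*.out
    start=$(date +%s%N)
    make -C "$dir" -j "$j" >/dev/null
    end=$(date +%s%N)
    echo "-j $j: $(( (end - start) / 1000000 )) ms"
done
rm -r "$dir"
```

For a real project, replace the toy Makefile with a clean build of your own tree; the hypothetical `-j` candidates above (1, `nproc`, 1.5x `nproc`) are just starting points.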

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Miguel