Is it better to increase process niceness or limit the number of cores used on a shared system?

I've often worked in settings where several users have access to the same machine for computationally intensive tasks. The machines in question are standard Linux machines (no Docker/Kubernetes or similar).

In some cases, there has been a policy of never using all the machine's cores for a given task, so as not to hog resources. In other cases, the policy has been to run non-critical tasks with high nice values on the child processes, giving priority to more important tasks.
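For concreteness, both policies can be expressed with standard Linux tools. This is only an illustrative sketch; the job name `./long_job` and the PID are hypothetical, and the core counts are arbitrary:

```shell
# Policy 1: limit the cores a task may use.
# taskset pins the process (and its children) to a fixed CPU set, here CPUs 0-3:
taskset -c 0-3 ./long_job

# Many parallel runtimes also honor a thread-count cap, e.g. OpenMP:
OMP_NUM_THREADS=4 ./long_job

# Policy 2: run the task at low priority instead.
# nice 19 is the lowest priority; the scheduler gives the task CPU time
# mainly when higher-priority work is idle:
nice -n 19 ./long_job

# Lower the priority of an already-running process you own (PID is hypothetical):
renice -n 19 -p 12345
```

Note that unprivileged users can raise a process's nice value but generally cannot lower it again afterwards.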

Which is the better, or more 'polite', way of allocating resources on a multi-user system? If neither is universally better, what are the pros and cons of each?



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow