Change Python import behavior based on intended usage

I have a scientific code written in python.

The code works as a proper package: it is imported and its underlying functionality is then used. However, the workflow allows the package to be used in two "modes":

The first is as a pre/post-processor: setting up simulations and initial conditions, creating grids (it's a fluid flow solver), manipulating data, etc. For this kind of usage, one is typically writing scripts and running them in serial on local machines. This mode takes full advantage of Python's amazing ecosystem, importing and utilizing other packages like scipy.

The second mode is for HPC applications, where the package's functionality runs on tens of thousands of processors with mpi4py. In this mode I don't need many external packages; I want to keep them to a minimum and avoid unnecessary imports (this question comes from an issue I am having on a cluster where numpy cannot import its random module above a certain number of processes).
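To illustrate the kind of trimming I mean, one thing I have considered is deferring the heavy imports into the functions that actually need them, so that an HPC run which never calls them never pays for the import. The module and function names here are just placeholders for my package's layout:

    # mypackage/preprocessing.py -- hypothetical module layout

    def interpolate_initial_condition(grid, samples):
        """Interpolate scattered sample data onto the solver grid.

        scipy is imported lazily, so HPC runs that never call this
        function never trigger the import at all.
        """
        from scipy.interpolate import griddata  # deferred until first call
        return griddata(samples[:, :-1], samples[:, -1], grid)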

My question is: is there a Pythonic way to differentiate between these modes of use, so that my package can be heavy, feature-rich, and all things great about Python in one mode, and then "trim the fat" when it is used at run time?
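For concreteness, the sort of thing I can imagine is an environment variable gating the imports in the package's __init__.py, though I don't know whether this is considered good practice. The variable name and submodules here are hypothetical:

    # mypackage/__init__.py -- hypothetical sketch

    import os

    # Core functionality needed in both modes (solver, grids, MPI glue).
    from . import core

    # Only pull in the heavy, scipy-dependent tooling outside HPC mode.
    if os.environ.get("MYPACKAGE_LIGHT") != "1":
        from . import preprocessing
        from . import postprocessing
        from . import plotting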

If it spurs an idea: when I want the package to be light and not import unnecessary packages, it is always called from a single "run.py" script which basically bootstraps a simulation and executes it. Otherwise, the package is imported from custom scripts or from the interpreter, and I want the user to have access to all the available functionality.
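In that case, run.py could flip the same hypothetical switch from the sketch above before the package is ever imported, something along these lines (again, all names are made up):

    # run.py -- hypothetical bootstrap script for HPC runs

    import os

    # Request the lightweight import path before the package is loaded.
    os.environ["MYPACKAGE_LIGHT"] = "1"

    import mypackage

    simulation = mypackage.core.Simulation.from_input_file("input.ini")
    simulation.run()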


