Spark driver and executor on the same machine

In an EMR cluster (or any YARN cluster), is it possible that YARN allocates the driver and an executor on the same EC2 instance? I want to know whether the driver can use the full storage and processing power of one EC2 instance, or whether part of that instance will be used to serve other Spark jobs running on the cluster. The latter could cause my driver to run out of memory.

I assume the ResourceManager decides this based on cluster resource availability?
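For context, where the driver lands depends on the deploy mode passed to `spark-submit`: in `client` mode the driver runs in the JVM on the machine that launched the job, while in `cluster` mode YARN places the driver in a container on a worker node it chooses. A minimal sketch (the memory values are illustrative, not recommendations):

```shell
# Cluster mode: YARN picks a node for the driver container;
# it may end up on the same instance as one of the executors.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 4g \
  --executor-memory 4g \
  my_job.py

# Client mode: the driver runs on the submitting machine
# (on EMR, typically the master node), separate from executors
# unless executors are also scheduled there.
spark-submit --master yarn --deploy-mode client my_job.py
```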



Solution 1:[1]

In a non-EMR YARN cluster: the driver and executors can run on the same machine/instance.

In AWS EMR: the driver may run on the master node or on one of the core instances, so it can end up on the same EC2 instance as an executor.

Incidentally, on EMR the YARN services (such as the ResourceManager) run on the master node.
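To reason about whether a co-located driver can starve executors of memory, it helps to know how big the driver's YARN container request is. A rough sketch using Spark's default overhead rule (the larger of 384 MB or 10% of `spark.driver.memory`); the function name is my own, not a Spark API:

```python
def driver_container_mb(driver_memory_mb: int,
                        overhead_factor: float = 0.10,
                        min_overhead_mb: int = 384) -> int:
    """Approximate the YARN container size requested for a Spark driver
    in cluster mode: spark.driver.memory plus memory overhead,
    using Spark's default of max(384 MB, 10% of driver memory)."""
    overhead = max(min_overhead_mb, int(driver_memory_mb * overhead_factor))
    return driver_memory_mb + overhead

# A 4 GiB driver requests roughly 4505 MB from YARN; if the node hosting
# the driver also runs executors, that much less is available to them.
print(driver_container_mb(4096))
```

YARN may round this request up to its minimum allocation increment, so the actual grant can be slightly larger.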

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 thebluephantom